Planet Haskell

October 02, 2023

Gabriella Gonzalez

My views on NeoHaskell

Recently Nick Seagull announced a NeoHaskell project which (I believe) has generated some controversy. My first run-in with NeoHaskell was this post on cohost criticizing the NeoHaskell project, and a few of my friends within the Haskell community have also expressed concern about it. My gut reaction is also critical, but I wanted to do a more thorough investigation before speaking publicly against NeoHaskell, so I figured I would dig into the project more first. Who knows, maybe my gut reaction is wrong? 🤷‍♀️

Another reason NeoHaskell is relevant to me is that I think a lot about marketing and product management for the Haskell community, and even presented a talk on How to market Haskell to mainstream programmers, so I’m particularly keen to study NeoHaskell through that lens to see whether Nick is trying to approach things in a similar way or not.

I also have credentials to burnish in this regard. I have a lot of experience with product management and technical product management for open source projects via my work on Dhall. Not only did I author the original implementation of Dhall but I singlehandedly built most of the language ecosystem (including the language standard, documentation, numerous language bindings, and the command-line tooling) and mentored others to do the same.

Anyway, with that out of the way, on to NeoHaskell:

What is NeoHaskell?

I feel like this is probably the most important question to answer because unless there is a clear statement of purpose for a project there’s nothing to judge; it’s “not even wrong” because there’s no yardstick by which to measure it and nothing to challenge.

So what is NeoHaskell?

I’ll break this into two parts: what NeoHaskell is right now and what NeoHaskell aspires to be.

Based on what I’ve gathered, right now NeoHaskell is:

However, it’s not clear what NeoHaskell aspires to be from studying the website, the issue tracker, or the announcement:

  • Is this going to be a new programming language inspired by Haskell?

    In other words, will this be a “clean room” implementation of a language which is Haskell-like?

  • … or is this going to be a fork of Haskell (more specifically: ghc) to add the desired features?

    In other words, will the relationship of NeoHaskell to Haskell be similar to the relationship between NeoVim and Vim? (The name seems to suggest as much)

  • … or is this going to be changes to the command-line Haskell tooling?

    In other words, will this be kind of like stack and promote a new suite of tools for doing Haskell development?

  • … or is this going to be improvements to the Haskell package ecosystem?

    In other words, will this shore up and/or revive some existing packages within the Haskell ecosystem?

Here’s what I think NeoHaskell aspires to be based on carefully reading through the website and all of the issues in the issue tracker and drawing (I believe) reasonable inferences:

NeoHaskell is not going to be a fork of ghc and is instead proposing to implement the following things:

  • A new command-line tool (neo) similar in spirit to stack
    • It proposes some features not present in stack, but overall it reads to me as similar to stack.
  • A GHC plugin that would add:
    • new language features (none proposed so far, but it aims to be a Haskell dialect)
    • improved error messages
    • some improvements to the UX (e.g. automatic hole filling)
  • An attempt to revive the work on a mobile (ARM) backend for Haskell
  • An overhaul of Haskell’s standard libraries similar in spirit to foundation
  • TemplateHaskell support for the cpython package for more ergonomic Python interop
  • A set of documentation for the language and some parts of the ecosystem
  • An event sourcing framework
    • … and a set of template applications based on that framework

And in addition to that concrete roadmap Nick Seagull is essentially proposing the following governance model for the NeoHaskell project (and possibly the broader Haskell ecosystem if NeoHaskell gains traction):

  • Centralizing product management in himself as a benevolent dictator

    I don’t believe I’m exaggerating this. Here is the relevant excerpt from the announcement post, which explicitly references the BDFL model:

    I believe that in order for a product to be successful, the design process must be centralized in a single person. This person must listen to the users, the other designers, and in general must have an open mind to always cherry-pick all possible ideas in order to improve the product. I don’t believe that a product should be guided by democracy, and neither it should implement all suggestions by every user. In other words, I’ll be the one in charge of generating and listening to discussions, and prioritizing the features of the project.

    I understand that this comes with some risk, but at the same time I believe that all programming tools like Python and Ruby that are very loved by their communities are like that because of the BDFL model

  • Organizing work via the NeoHaskell discord and NeoHaskell GitHub issue tracker

I feel like it should have been easier to gather this concrete information about NeoHaskell’s aspirational goals, if only so that the project is less about vibes and more of a discussion about a concrete roadmap.

Alright, so now I’ll explain my general impression of this project. I’ll start with the positive feedback, follow with the negative feedback, and be a bit less reserved and more emotionally honest.

Positive feedback

Welcome contributions

I’m not the kind of person who will turn down someone willing to do work to make things better as long as they don’t make things worse. A new mobile backend for Haskell sounds great! Python interop using TemplateHaskell sounds nice! Documentation? Love it!

A GHC plugin is a good approach

I think the approach of implementing this as a GHC plugin is a much better idea than forking ghc. This sidesteps the ludicrous amount of work that would be required to maintain a fork of ghc.

Moreover, implementing any Haskell dialect as a GHC plugin actually minimizes ecosystem fragmentation because (similar to an alternate Prelude) it doesn’t “leak”. If one of your dependencies uses a GHC plugin for the NeoHaskell dialect then your package doesn’t have to use that same dialect (you can still build that dependency and code your package in non-Neo Haskell). cabal can handle that sort of thing transparently.
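To make that concrete, here is a sketch of what opting in could look like. The package name neohaskell-plugin and module NeoHaskell.Plugin are hypothetical names I made up for illustration; only the -fplugin flag itself is standard GHC functionality. A package enables the plugin in its own stanza, and nothing about this leaks to packages that depend on it:

```cabal
library
  build-depends:
    base,
    -- hypothetical plugin package name, for illustration only
    neohaskell-plugin
  ghc-options:
    -- -fplugin enables a GHC compiler plugin for this package's
    -- own modules only; dependents are unaffected
    -fplugin=NeoHaskell.Plugin
```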

Haskell does need better product management

I think the Haskell Foundation was supposed to be this (I could be wrong), but that didn’t really seem to pan out.

Either way, I think a lot of us know what good product management is and it is strikingly absent from the ecosystem.

Negative feedback

Benevolent dictator

I think it’s ridiculous that someone who hasn’t made significant contributions to the Haskell ecosystem wants to become a benevolent dictator for a project aspiring to make an outsized impact on the Haskell ecosystem. I know that this is harsh and a personal attack on Nick and I’m also mindful that there’s a real person behind the avatar. HOWEVER, when you propose to be a benevolent dictator you are inherently making things personal. A proposal to become a benevolent dictator is essentially a referendum on you as a person.1

And it’s not just a matter of fairness or whatever. Nick’s lack of Haskell credentials directly impacts his ability to meaningfully improve upon prior art if he doesn’t understand the current state of the art. Like, when Michael Snoyman created stack it did lead to a lot of fragmentation in the Haskell tooling, but at least I felt like he was justified in his attempt because he had an impressive track record and a deep understanding of the Haskell ecosystem and toolchain.

I do not get anything remotely resembling that impression from Nick Seagull. He strikes me as a dilettante in this area and not just due to his lack of Haskell credentials but also due to some of his questionable proposed changes. This brings me to:

Unwelcome contributions

Not all contributions benefit the ecosystem2. I think proposing a new neo build tool is likely to fragment the tooling in a way similar to stack. I have worked pretty extensively with all three of cabal, stack and Nix throughout my career, and my intuition based on that experience is that the only improvement to the Haskell command-line experience that is viable and that will “win” in the long run is one that is directly upstreamed into cabal. It’s just that nobody wants to do that because it’s not as glamorous as writing your own build tool.

Similarly, I think his proposed vision of “event source all the Haskell applications” (including command-line scripts) is poorly thought out. I firmly subscribe to the principle of least power, which says that you should use the simplest type or abstraction available that gets the job done instead of trying to shoehorn everything into the same “god type” or “god abstraction”. I learned this the hard way when I tried to shoehorn everything into my pipes package and realized that it was a huge mistake, so it’s not like I’m innocent in this regard. Don’t make the same mistake I did.

And it matters that some of these proposed changes are counterproductive because if he indeed plays a role as a benevolent dictator you’re not going to get to pick and choose which changes to keep and which changes to ignore. You’re getting the whole package, like it or not.

Not good product management

I don’t believe NeoHaskell is the good product management we’re all looking for. “Haskell dialect + python interop + event sourcing + mobile backend” is not a product. It’s an odd bundle of features that doesn’t have a clear market or vertical or use case to constrain the design and navigate tradeoffs. The NeoHaskell roadmap comes across to me as a grab bag of unrelated features which individually sound good, but that is not necessarily good product management.

To make this concrete: what is the purpose of bundling both python interop and a mobile backend into NeoHaskell’s roadmap? As far as I know there is no product vertical that requires both of those things.

The overall vibe is bad

My initial impression of NeoHaskell was that it struck me as bullshit. Carefully note that I’m not saying that Nick is a bullshitter, but if he wants to be taken seriously then he needs to rethink how he presents his ideas. Everything from the tone of the announcement post (including the irrelevant AI-generated images), to the complete absence of any supporting code or mockups, to the wishy-washy statement of purpose contributed to the non-serious vibes.

Conclusion

Anyway, I don’t hate Nick and I’m pretty sure I’d get along with him great in person in other contexts. He also seems like a decently accomplished guy in other respects. However, I think nominating himself as a benevolent dictator for an ambitious ecosystem project is a bit irresponsible. That said, we all make mistakes and can learn from them.

And I don’t endorse NeoHaskell. I don’t think it’s any more likely to succeed than Haskell absent some better product management. “I like simple Haskell tailored to blue-collar engineers” is a nice vibe but it’s not a product.

by Gabriella Gonzalez (noreply@blogger.com) at October 02, 2023 05:36 PM

Mark Jason Dominus

Irish logarithm forward instead of backward

Yesterday I posted about the so-called “Irish logarithm”, Percy Ludgate's little algorithm for single-digit multiplication.

Hacker News user sksksfpuzhpx said:

There's a much simpler way to derive Ludgate's logarithms

and referred to Brian Coghlan's article “Percy Ludgate's Logarithmic indices”.

Whereas I was reverse-engineering Ludgate's tables with a sort of ad-hoc backtracking search, if you do it right you can do it more easily with a simple greedy search.

Uh oh, I thought, I will want to write this up before I move on to the thing I planned to do next, which made it all the more likely that I never would get to the thing I had planned to do next. But Shreevatsa R. came to my rescue and wrote up the Coghlan thing at least as well as I could have myself. Definitely check it out.

Thank you, Shreevatsa!

by Mark Dominus (mjd@plover.com) at October 02, 2023 03:33 PM

Well-Typed.Com

Improving GHC's configuration logic and cross-compilation support with ghc-toolchain

Rodrigo worked on an internship with the GHC team at Well-Typed this summer. In this post he reports on his progress improving GHC’s configuration tooling as a step towards better cross-compiler support.

GHC, like most high-level language compilers, depends upon a set of tools like assemblers, linkers, and archivers for the production of machine code. Collectively these tools are known as a toolchain and capture a great deal of platform-dependent knowledge.

Traditionally, developers generate a ./configure script using the venerable autoconf tool, then users execute this script when they install a GHC binary distribution. The ./configure script determines the location of programs (such as the C compiler) and which options GHC will need to pass them.

While this autoconf-centric model of toolchain configuration has served GHC well, it has two key issues:

  • For cross-compiling to a different platform, it would be highly valuable to users if GHC became a runtime-retargetable compiler (like rustc and go). That is, the user should be able to download a single GHC binary distribution and use it to compile not only for their local machine, but also for any other targets that GHC supports.

  • The ten-thousand-line sh file that is GHC’s ./configure script has historically been challenging to maintain and test. Modifications to the ./configure script are among the most risky changes routinely made to the compiler, because it is easy to introduce a bug on some specific toolchain configuration, and infeasible to test all possible configurations in CI.

To address these issues, we are introducing ghc-toolchain, a new way to configure the toolchain for GHC, which will eventually replace the existing toolchain configuration logic in the ./configure script. Its main goal is to allow new compilation toolchains to be configured for GHC at any point in time, notably after the compiler has been installed. For example, calling ghc-toolchain --triple=x86_64-w64-mingw32 will configure a compilation toolchain on the host machine capable of producing code for an x86_64 machine running Windows using MinGW. This is an important step towards making GHC runtime-retargetable, and since ghc-toolchain is implemented in Haskell, it will be much easier to modify and test than the ./configure script.

In this post we explain in more detail how GHC interacts with the system toolchain and how ghc-toolchain facilitates our future goal of making GHC a runtime-retargetable compiler.

Compiler Toolchains

GHC cannot produce executables from Haskell programs in isolation – it requires a correctly configured toolchain to which it can delegate some responsibilities. For example, GHC’s native code generator backend is capable of generating assembly code from a Haskell program, however, producing object code from that assembly, and linking the objects into an executable, are all tasks done by the compilation toolchain, which is invoked by GHC using the flags that were configured for it.

Configuring a compiler toolchain is about locating the set of tools required for compilation, the base set of flags required to invoke each tool, and properties of these tools. For example, this might include:

  • determining various characteristics of the platform (e.g. the word size),
  • probing to find which tools are available (the C compiler, linker, archiver, object merging tool, etc.),
  • identifying which flags GHC should pass to each of these tools,
  • determining whether the tools support response files to work around command-line length limits, and
  • checking for and working around bugs in the toolchain.

At the moment, when a GHC binary distribution is installed, the ./configure script will perform the above steps and store the results in a settings file. GHC will then read this file so it can correctly invoke the toolchain programs when compiling Haskell executables.

To cross-compile a Haskell program, a user must build GHC from source as a cross-compiler (see the GHC wiki). This requires configuring a cross-compilation toolchain, that is, a toolchain that runs on the machine compiling the Haskell program but that produces executables to run on a different system. It is currently a rather involved process.

The runtime-retargetable future of GHC

A key long-term goal of this work is to allow GHC to become runtime-retargetable. This means being able to call ghc --target=aarch64-apple-darwin and have GHC output code for an AArch64 machine, or call ghc --target=javascript-ghcjs to generate Javascript code, regardless of the platform ghc is being invoked on.

Crucially, this requires the configuration step to be repeated at a later point, rather than only when the GHC binary distribution is installed. Once GHC is fully runtime-retargetable, this will allow you to use multiple different toolchains, potentially targeting different platforms, with the same installed compiler.

  • At the simplest level, you might just have two different toolchains for your host platform (for example, a gcc-based toolchain and a clang-based toolchain), or you might just configure a toolchain which uses the new mold linker rather than ld.gold.

  • In a more complex scenario, you may have a normal compiler toolchain as well as several different cross-compiler toolchains. For example, a toolchain which produces Javascript, a toolchain which produces WebAssembly, a toolchain which produces AArch64 object code and so on.

The idea is that the brand new ghc-toolchain will be called once to configure the toolchain that GHC will use when compiling for a target, then ghc --target=<triple> can be called as many times as needed. For example, if you have an x86 Linux machine and wish to produce code for AArch64 devices, the workflow could look something like:

# Configure the aarch64-apple-darwin target first
# (We only need to do this once!)
ghc-toolchain --triple=aarch64-apple-darwin --cc-opt="-I/some/include/dir" --cc-linker-opt="-L/some/library/dir"

# Now we can target aarch64-apple-darwin (as many times as we'd like!)
ghc --target=aarch64-apple-darwin -o MyAwesomeTool MyAwesomeTool.hs
ghc --target=aarch64-apple-darwin -o CoolProgram CoolProgram.hs

Introducing ghc-toolchain

ghc-toolchain is a standalone tool for configuring a toolchain. It receives as input a target triplet (e.g. x86_64-deb10-linux) and user options, discovers the configuration, and outputs a “target description” (.target file) containing the configured toolchain.
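To give a flavor of what a target description contains, here is a purely illustrative Haskell sketch; the type and field names below are invented for this post and are not the actual data types used by ghc-toolchain:

```haskell
-- Illustrative sketch only: these names are hypothetical,
-- not ghc-toolchain's real API.
data Program = Program
  { prgPath  :: FilePath  -- e.g. "/usr/bin/cc"
  , prgFlags :: [String]  -- base flags GHC should pass on every invocation
  } deriving Show

data Target = Target
  { tgtTriple                :: String  -- e.g. "x86_64-deb10-linux"
  , tgtWordSize              :: Int     -- platform word size, in bytes
  , tgtCCompiler             :: Program
  , tgtLinker                :: Program
  , tgtArchiver              :: Program
  , tgtSupportsResponseFiles :: Bool    -- for working around command-line length limits
  } deriving Show
```

The real format records a good deal more platform-dependent knowledge than this, but the shape is the same: one self-contained description per target, which GHC can later select by triple.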

At the moment, .target files generated by ghc-toolchain can be used by GHC’s build system (Hadrian) by invoking ./configure with --enable-ghc-toolchain. Otherwise, Hadrian reads the configuration from a .target file generated by ./configure itself.

In the future, ghc-toolchain will be shipped in binary distributions to allow new toolchains to be added after the compiler is installed (generating new .target files). GHC will then be able to choose the .target file for the particular target requested by the user.

From a developer standpoint, ghc-toolchain being written in Haskell makes it easier to modify in future, especially when compared to the notoriously difficult to write and debug ./configure scripts.

Migration to ghc-toolchain

We are migrating to ghc-toolchain in a staged manner, since toolchain configuration logic is amongst the most sensitive things to change in the compiler. We want to ensure that the configuration logic in ghc-toolchain is correct and agrees with the logic in ./configure. Therefore, in GHC 9.10 ghc-toolchain will be shipped and validated but not enabled by default.

To validate ghc-toolchain, GHC will generate .target files with both ./configure and ghc-toolchain and compare the outputs against each other, emitting a warning if they differ. This means we will be able to catch mistakes in ghc-toolchain (and in ./configure too!) before we make ghc-toolchain the default method for configuring toolchains in a subsequent release. This mechanism has already identified plenty of issues to resolve.

Future work

Despite ghc-toolchain bringing us closer to a runtime-retargetable GHC, there is still much work left to be done (see #11470). The next step is to instruct GHC to choose between multiple available .target files at runtime, instead of reading the usual settings file (tracked in #23682).

Beyond that, however, there are many open questions still to resolve:

  • How will the runtime system, and core libraries such as base, be provided for the multiple selected targets?
  • How will this fit into ghcup’s installation story?
  • How will cabal handle multiple targets?

At the moment, binary distributions include the RTS/libraries already compiled for a single target. Instead, we are likely to need some mechanism for users to recompile the RTS/libraries when they configure a new target, or to download ready-built versions from upstream.

Moreover, accommodating TemplateHaskell under runtime retargetability is particularly nontrivial, and needs more design work.

Conclusion

ghc-toolchain is a new tool for configuring toolchains and targets. It improves on GHC’s existing ./configure-based configuration workflow by allowing multiple targets’ toolchains to be configured at any time, and by making maintenance and future updates to the toolchain configuration logic much easier. However, toolchain configuration is a challenging part of the compiler, so we’re being conservative in migrating to ghc-toolchain, and carefully validating it before making it the default.

Moreover, ghc-toolchain is an important step towards making a runtime-retargetable GHC a reality, though there is still much work left to do. We are grateful to all the GHC developers involved in working towards runtime-retargetability.

Well-Typed is able to work on GHC, HLS, Cabal and other core Haskell infrastructure thanks to funding from various sponsors. If your company might be able to contribute to this work, sponsor maintenance efforts, or fund the implementation of other features, please read about how you can help or get in touch.

by rodrigo at October 02, 2023 12:00 AM

October 01, 2023

Mark Jason Dominus

The Irish logarithm

The Wikipedia article on “Irish logarithm” presents this rather weird little algorithm, invented by Percy Ludgate. Suppose you want to multiply $a$ and $b$, where both are single-digit numbers.

Normally you would just look it up on a multiplication table, but please bear with me for a bit.

To use Ludgate's algorithm you need a different little table:

$$ \begin{array}{rl} T_1 = & \begin{array}{cccccccccc} \tiny\color{gray}{0} & \tiny\color{gray}{1} & \tiny\color{gray}{2} & \tiny\color{gray}{3} & \tiny\color{gray}{4} & \tiny\color{gray}{5} & \tiny\color{gray}{6} & \tiny\color{gray}{7} & \tiny\color{gray}{8} & \tiny\color{gray}{9} \\ 50 & 0 & 1 & 7 & 2 & 23 & 8 & 33 & 3 & 14 \\ \end{array} \end{array} $$

and a different bigger one:

$$ \begin{array}{rl} T_2 = & % \left( \begin{array}{rrrrrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, & 3, & 6, & 12, \\ {\tiny\color{gray}{10}} & 24, & 48, & 0, & 0, & 9, & 18, & 36, & 72, & 0, & 0, \\ {\tiny\color{gray}{20}} & 0, & 27, & 54, & 5, & 10, & 20, & 40, & 0, & 81, & 0, \\ {\tiny\color{gray}{30}} & 15, & 30, & 0, & 7, & 14, & 28, & 56, & 45, & 0, & 0, \\ {\tiny\color{gray}{40}} & 21, & 42, & 0, & 0, & 0, & 0, & 25, & 63, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 35, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, & 0, & 0, & 49\hphantom{,} \end{array} % \right) \end{array} $$

I've formatted $T_2$ in rows for easier reading, but it's really just a zero-indexed list of numbers. So for example $T_2(7)$ is $3$.

The tiny gray numbers in the margin are not part of the table; they are counting the elements so that it is easy to find element $n$.

Ludgate's algorithm is simply:

$$ ab = T_2(T_1(a) + T_1(b)) $$

Let's see an example. Say we want to multiply $6\cdot 7$. We first look up $6$ and $7$ in $T_1$, and get $8$ and $33$, which we add, getting $41$. Then $T_2(41)$ is $42$, which is the correct answer.
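The whole scheme is small enough to transcribe and check mechanically. Here is a short Haskell sketch; the tables are copied from the post above, and the out-of-range guard is my addition (the only index sums that run past the end of $T_2$ involve a factor of $0$, where the entries are don't-cares anyway):

```haskell
-- Ludgate's tables, transcribed from the tables above.
t1 :: [Int]
t1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

t2 :: [Int]
t2 = [  1,  2,  4,  8, 16, 32, 64,  3,  6, 12
     , 24, 48,  0,  0,  9, 18, 36, 72,  0,  0
     ,  0, 27, 54,  5, 10, 20, 40,  0, 81,  0
     , 15, 30,  0,  7, 14, 28, 56, 45,  0,  0
     , 21, 42,  0,  0,  0,  0, 25, 63,  0,  0
     ,  0,  0,  0,  0,  0,  0, 35,  0,  0,  0
     ,  0,  0,  0,  0,  0,  0, 49 ]

-- ab = T2(T1(a) + T1(b)); index sums that run past the end of T2
-- occur only when a factor is 0, so we may read those as 0.
ludgate :: Int -> Int -> Int
ludgate a b
  | i < length t2 = t2 !! i
  | otherwise     = 0
  where i = t1 !! a + t1 !! b

main :: IO ()
main = print (and [ ludgate a b == a * b | a <- [0..9], b <- [0..9] ])
```

Running main prints True: the two little lookup tables really do reproduce the whole single-digit multiplication table.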

This isn't useful for paper-and-pencil calculation, because it only works for products of single digits, up to $9\cdot 9$, and an ordinary multiplication table is easier to use and remember. But Ludgate invented this for use in a mechanical computing engine, for which it is much better-suited.

The table lookups are mechanically very easy. They are simple one-dimensional lookups: to find $T_1(n)$ you just look at entry $n$ in the table, which can be implemented as a series of ten metal rods of different lengths, or something like that. Looking things up in a multiplication table is harder because it is two-dimensional.

The single addition in Ludgate's algorithm can also be performed mechanically: to add $T_1(a)$ and $T_1(b)$, you have some thingy that slides up by $T_1(a)$ units, and then by $T_1(b)$ more, and then wherever it ends up is used to index into $T_2$ to get the answer. The $T_2$ table doesn't have to be calculated on the fly; it can be made up ahead of time, and machined from brass or something, and incorporated directly into the machine. (It's tempting to say “hardcoded”.)

The tables look a little uncouth at first but it is not hard to figure out what is going on. First off, $T_2$ is the inverse of $T_1$ in the sense that $$T_2(T_1(n)) = n\tag{$\color{darkgreen}{\spadesuit}$}$$

whenever $n$ is in range — that is, when $0\le n\le 9$.

$T_1$ is more complex. We must construct it so that

$$T_2(T_1(a) + T_1(b)) = ab.\tag{$\color{purple}{\clubsuit}$}$$

for all $a$ and $b$ of interest, which means that $T_1$ must behave something like a logarithm: adding the indices for $a$ and $b$ must land on an index that $T_2$ maps to $ab$.

If you look over the table you should see that the entry $k$ is often followed by $2k$. That is, $T_2(i+1) = 2\,T_2(i)$, at least some of the time. In fact, this is true in all the cases we care about, where $i = T_1(a) + T_1(b)$ for some single digits $a$ and $b$.

The second row could just as well have started with $24, 48, 96, 192$, but Ludgate doesn't need the entries for $96$ and $192$, so he made them zero, which really means “I don't care”. This will be important later.

The algorithm says that if we want to compute $2n$, we should compute $$ \begin{align} 2n & = T_2(T_1(2) + T_1(n)) && \text{Because $\color{purple}{\clubsuit}$} \\ & = T_2(1 + T_1(n)) \\ & = 2T_2(T_1(n)) && \text{Because moving one space right doubles the value}\\ & = 2n && \text{Because $\color{darkgreen}{\spadesuit}$} \end{align} $$

when $n$ is a single digit.

I formatted $T_2$ in rows of $10$ because that makes it easy to look up examples like $T_2(41)$, and because that's how Wikipedia did it. But this is very misleading, and not just because it makes $T_2$ appear to be a two-dimensional table when it's really a vector. $T_2$ is actually more like a compressed version of a higher-dimensional table.

Let's reformat the table so that the rows have length $7$ instead of $10$:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 5, & 10, & 20, & 40, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 15, & 30, & 0, & 7, & 14, \\ {\tiny\color{gray}{35}} & 28, & 56, & 45, & 0, & 0, & 21, & 42, \\ {\tiny\color{gray}{42}} & 0, & 0, & 0, & 0, & 25, & 63, & 0, \\ {\tiny\color{gray}{49}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{56}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{63}} & 0, & 0, & 0, & 49 \\ \end{array} $$

We have already seen that moving one column right usually multiplies the entry by $2$. Similarly, moving down by one row is seen to triple the value — not always, but in all the cases of interest. Since the rows have length $7$, moving down one row from $T_2(i)$ gets you to $T_2(i+7)$, and this is why $T_1(3) = 7$: to compute $3n$, one does:

$$ \begin{align} 3n & = T_2(T_1(3) + T_1(n)) && \text{Because $\color{purple}{\clubsuit}$} \\ & = T_2(7 + T_1(n)) \\ & = 3T_2(T_1(n)) && \text{Because moving down triples the value}\\ & = 3n && \text{Because $\color{darkgreen}{\spadesuit}$} \end{align} $$

Now here is where it gets clever. It would be straightforward to build $T_2$ as a stack of tables, with each layer in the stack having entries quintuple those of the layer above, like this:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 0, & 0, & 0, & 0, & 0, \\ \\ {\tiny\color{gray}{35}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{42}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{49}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{56}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{63}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ \\ {\tiny\color{gray}{70}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{77}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{84}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{91}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{98}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ \end{array} $$

This works, if we make $T_1(5)$ the correct offset, which is $35$. But it wastes space, and the larger $T_2$ is, the more complicated and expensive is the brass thingy that encodes it. The last six entries of each layer in the stack are don't-cares, so we can just omit them:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{65}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{72}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{79}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{86}} & 0\hphantom{,} \\ \end{array} $$

And to compensate we make the offset $29$ instead of $35$: you now move down one layer in the stack by skipping $29$ entries forward, instead of $35$.

The table is still missing all the multiples of $7$, but we can repeat the process. The previous version of $T_2$ can now be thought of as a single layer, and we can stack another layer below it, with all the entries in the new layer being $7$ times those in the original one:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{65}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{72}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{79}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{86}} & 0, \\ \\ \hline \\ {\tiny\color{gray}{87}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{94}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{101}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{108}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{115}} & 0, \\ \\ {\tiny\color{gray}{116}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{123}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{130}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{137}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{144}} & 0, \\ \\ {\tiny\color{gray}{145}} & 0, & 0, & 0, & 0, & 0, & 0, & \ldots \\ \\ \hline \\ {\tiny\color{gray}{174}} & 49\hphantom{,} \\ \end{array} $$

Each layer in the stack has $87$ entries, so we could take $T_1(7) = 87$ and it would work, but the last $28$ entries in every layer are zero, so we can discard those and reduce the layers to $59$ entries each.

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, \\ \\ \hline \\ {\tiny\color{gray}{59}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{73}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{80}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{87}} & 0, \\ \\ {\tiny\color{gray}{88}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{95}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{102}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{109}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{116}} & 0, \\ \\ {\tiny\color{gray}{117}} & 0, \\ \\ \hline \\ {\tiny\color{gray}{118}} & 49\hphantom{,} \\ \end{array} $$

Doing this has reduced the layers from $87$ to $59$ elements each, but Ludgate has another trick up his sleeve. The last few numbers in the top layer are the $45$, the $25$, and a lot of zeroes. If he could somehow finesse the $45$ and the $25$, he could trim the top two layers all the way back to only 38 entries each:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 80, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, \\ \\ \hline \\ {\tiny\color{gray}{38}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{45}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{52}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{59}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 0, \\ \\ {\tiny\color{gray}{67}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{74}} & 0, & 0, \\ \hline \\ {\tiny\color{gray}{76}} & 49\hphantom{,} \\ \end{array} $$

We're now missing the $45$ and we need to put it back. Fortunately the place we want to put it is position $43$, and that slot contains a zero anyway. And similarly we want to put the $25$ at position $58$, also empty:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, \\ \\ \hline \\ {\tiny\color{gray}{38}} & 7, & 14, & 28, & 56, & 0, & \color{purple}{45}, & 0, \\ {\tiny\color{gray}{45}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{52}} & 63, & 0, & 0, & 0, & 0, & 0, & \color{purple}{25}, \\ {\tiny\color{gray}{59}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 0, \\ \\ {\tiny\color{gray}{67}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{74}} & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{76}} & 49\hphantom{,} \\ \end{array} $$

The arithmetic pattern is no longer as obvious, but the property $T_2(T_1(a) + T_1(b)) = ab$ still holds.

We're not done yet! The table still has a lot of zeroes we can squeeze out. If we change $T_1(5)$ from $29$ to $23$, the $5, 10, 20, 40$ group will slide backward to just after the $54$, and the $15, 30$ will move to the row below that.

We will also have to move the other multiples of $5$. The $5$ itself moved back by six entries, and so did everything after that in the table, including the $45$ (now at position $37$) and the $35$ (now at position $55$), so those are still in the right places. Note that this means that the $7$ has moved from position $38$ to position $32$, so we now have $T_1(7) = 32$.

But the $25$ is giving us trouble. It needed to move back twice as far as the others, from $58$ to $46$, and unfortunately it now collides with the $63$, which is currently at position $46$.

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & \color{purple}{5}, & \color{purple}{10}, & \color{purple}{20}, & \color{purple}{40}, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & \color{purple}{15} & \color{purple}{30}, \\ \\ \hline \\ {\tiny\color{gray}{32}} & 7, & 14, & 28, & 56, & 0, & \color{darkgreen}{45}, & 0, \\ {\tiny\color{gray}{39}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{46}} & {63\atop\color{darkred}{¿25?}} & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{53}} & 0, & 0, & \color{darkgreen}{35}, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{64}} & 49\hphantom{,} \\ \end{array} $$

We need another tweak to fix the $25$. The $63$ is currently at position $46$. We can't move the $25$ any farther back to the left without causing more collisions. But we can move it forward, and if we move the $7$ forward by one space, the $63$ will move up one space also and the collision with the $25$ will be solved. So we insert a zero between the $30$ and the $7$, which moves the $7$ up from position $32$ to $33$:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 5, & 10, & 20, & 40, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 15 & 30, \\ \\ \hline \\ {\tiny\color{gray}{32}} & \color{purple}{0}, & \color{darkgreen}{7}, & \color{darkgreen}{14}, & \color{darkgreen}{28}, & \color{darkgreen}{56}, & 45, & 0, \\ {\tiny\color{gray}{39}} & 0, & \color{darkgreen}{21}, & \color{darkgreen}{42}, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{46}} & 25,& \color{darkgreen}{63}, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{53}} & 0, & 0, & 0, & \color{darkgreen}{35}, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{64}} & \color{purple}{0}, & \color{purple}{0}, & \color{darkgreen}{49}\hphantom{,} \\ \end{array} $$

All the other multiples of $7$ moved up by one space, but not the non-multiples $45$ and $25$. Also the $49$ had to move up by two, but that's no problem at all, since it was at the end of the table and has all the space it needs.

And now we are done! This is exactly Ludgate's table, which has the property that

$$T_2(p + 7q + 23r + 33s) = 2^p3^q5^r7^s$$

whenever $2^p3^q5^r7^s = ab$ for some digits $a$ and $b$. Moving right by one space multiplies the entry by $2$, at least for the entries we care about. Moving right by seven spaces multiplies the entry by $3$. To multiply by $5$ or $7$ we move right by $23$ or by $33$, respectively.

These are exactly the values in the $T_1$ table:

$$\begin{align} T_1(2) & = 1\\ T_1(3) & = 7\\ T_1(5) & = 23\\ T_1(7) & = 33 \end{align}$$

The rest of the table can be obtained by remembering that $T_1$ behaves like a logarithm, that is, $T_1(ab) = T_1(a) + T_1(b)$, so for example $T_1(9) = 14$ because $T_1(3) + T_1(3) = 7 + 7 = 14$. Or we can get $T_1(6)$ by multiplication, using $6 = 2\cdot 3$: multiplying by $6$ is the same as multiplying by $2$ and then by $3$, which means you move right by $1$ and then by $7$, for a total of $8$. Here's $T_1$ again for reference:

$$ \begin{array}{rl} T_1 = & \begin{array}{cccccccccc} 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline 50 & 0 & 1 & 7 & 2 & 23 & 8 & 33 & 3 & 14 \\ \end{array} \end{array} $$

(Actually I left out a detail: $T_1(0) = 50$. Ludgate wants $T_2(T_1(0) + T_1(b)) = 0$ for all $b$. So we need $T_2(50 + T_1(b)) = 0$ for each $T_1(b)$ in the table above. $50$ is the smallest value that works. This is rather painful, because it means that the $67$-item table above is not sufficient. Ludgate has to extend $T_2$ all the way out to $101$ items in order to handle the seemingly trivial case of $0\cdot 0 = 0$. But the last 35 entries are all zeroes, so the brass widget probably doesn't have to be too much more complicated.)
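Since the finished tables are tiny, the whole construction can be checked mechanically. Here's a quick sanity check in Python (a verification sketch of my own, not part of Ludgate's machine): it transcribes $T_1$ and the nonzero entries of $T_2$ from the final table above and confirms both identities for every case they're supposed to cover.

```python
# Ludgate's "Irish logarithm" tables, transcribed from the post.
# T1 maps a digit to its index; T2 maps sums of indices back to products.
T1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

# The nonzero entries of T2, read off the final table; every other entry,
# out to index 100, is zero.
nonzero = {
    0: 1, 1: 2, 2: 4, 3: 8, 4: 16, 5: 32, 6: 64,
    7: 3, 8: 6, 9: 12, 10: 24, 11: 48,
    14: 9, 15: 18, 16: 36, 17: 72,
    21: 27, 22: 54, 23: 5, 24: 10, 25: 20, 26: 40,
    28: 81, 30: 15, 31: 30,
    33: 7, 34: 14, 35: 28, 36: 56, 37: 45,
    40: 21, 41: 42, 46: 25, 47: 63, 56: 35, 66: 49,
}
T2 = [nonzero.get(i, 0) for i in range(101)]

# The defining property: T2(T1(a) + T1(b)) = a * b for every pair of digits,
# including the pairs involving 0, which land in the zero-filled tail.
for a in range(10):
    for b in range(10):
        assert T2[T1[a] + T1[b]] == a * b, (a, b)

# The prime-power form: T2(p + 7q + 23r + 33s) = 2^p 3^q 5^r 7^s, whenever
# that product is one that can arise from multiplying two digits.
digit_products = {a * b for a in range(10) for b in range(10) if a * b > 0}
for p in range(7):
    for q in range(5):
        for r in range(3):
            for s in range(3):
                n = 2**p * 3**q * 5**r * 7**s
                if n in digit_products:
                    assert T2[p + 7*q + 23*r + 33*s] == n, n

print("all products check out")
```

Running it prints "all products check out", so the tweaked tables really do multiply correctly.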

Wasn't that fun? A sort of mathematical engineering of a kind that has not been really useful for at least fifty years.

But actually that was not what I planned to write about! (Did you guess that was coming?) I thought I was going to write this bit as a brief introduction to something else, but the brief introduction turned out to be 2500 words and a dozen complicated tables.

We can only hope that part 2 is forthcoming. I promise nothing.

[ Update 20231002: Rather than the ad-hoc backtracking approach I described here, one can construct and in a simpler and more direct way. Shreevatsa R. explains. ]

by Mark Dominus (mjd@plover.com) at October 01, 2023 08:30 AM

Magnus Therning

How I use Emacs

I've recently written two posts about my attempts to use a slimmed down Emacs setup for some very specific use cases. I've put both posts on Reddit, here and here, and in both cases the majority of comments have been telling me that I should use emacsclient. I know they have good intentions – they want to share insight they've gained and benefit from on a daily basis. However, no matter how many Emacs devotees point out the benefits of emacsclient, I'm not about to start using it. This post is an attempt to answer why that is.

Up front I want to clarify a few things:

  1. Yes, I know how to use emacsclient, and
  2. yes, I know it is a good way to, in a way, improve Emacs startup time,1 and
  3. yes, I know how to turn on server-mode in Emacs, and finally
  4. yes, I know how to run Emacs using a systemd user unit.

With that out of the way, here are the two ways I use Emacs:

  1. As my starting point for work, i.e. writing code, keeping notes, tracking time, and writing my daily work journal.
  2. As my editor of ephemeral files.2

The next two sections explain more about these two distinct ways I use Emacs.

As my starting point for work

Number of packages 162
Init time (emacs-init-time) 1.883483s
Config size (by du -bch) 68K

Much of what I do on a daily basis starts in Emacs. I typically have one instance of Emacs open and I always keep it on the second virtual desktop. I always run it in the GUI. When I write code I start with opening a new tab, then I open a dired buffer in the project's folder (by using consult-projectile). When I need a terminal I open it from Emacs using one of terminal-here-project-launch or terminal-here. Occasionally I open a shell prompt inside Emacs using shell-pop.

Back when I used Vim as my main editor I always started a terminal first and then opened files from there. Since switching to Emacs I've completely stopped doing that. Over the last 8 or so years of Emacs usage there's only been a handful of times when I've wanted to open a file from the terminal and I've run M-x server-start and used emacsclient. The last time was more than a year ago.

As I typically keep exactly one Emacs open, and I start it soon after logging in, I'm not too concerned with startup time. I think under 2s is more than fast enough given the functionality I have in my setup.

This setup I use for taking notes and writing my daily work journal, as well as reading email, and programming in a half-dozen languages. I have a large-ish set of keybindings that I've set up using general.el, inspired by Spacemacs at first but by now it's started to gain its own character.

As my editor of ephemeral files ($EDITOR)

Number of packages 22
Init time (emacs-init-time) 0.209298s
Config size (by du -bch) 6.7K

Ephemeral files are files I tend to edit for less than 30 seconds, maybe a minute at most. There are three main use cases for ephemeral files:

  1. Searching the scrollback buffer in zellij, and copying bits to the clipboard for various uses.
  2. Editing files when running git from the command line. It's not something I do very often, but it happens.
  3. Editing shell commands. When they get a little too large to handle conveniently using ZSH directly I invoke edit-command-line.

For a few reasons I decided to make a second, completely separate configuration just to handle ephemeral files.

  • It will only be used in a terminal.
  • I want to be able to have some special keybindings that suit a specific use, e.g. for the scrollback buffer I've bound `SPC Y` to copy the selected text to the clipboard and then exit Emacs. It's a thing that I use all the time with the scrollback buffer, but never otherwise.
  • I have no need, nor any desire, to switch from editing a commit message, or searching a scrollback buffer, to reading email or editing an org-mode file. The complete separation is a feature.

With a startup time of less than a quarter of a second it is well within the acceptable, and there is absolutely no need to use emacsclient just to speed things up. Given my desire for separation, I wouldn't want to use my main Emacs instance as a server and edit ephemeral files anyway.

Conclusion

I've found a setup that seems to work really well and tick all the requirements I have when it comes to separation between use cases and ability to have custom keybindings for them. Also, Emacs is starting up very fast with my slimmed down configuration. If starting Emacs with the slimmed configuration starts taking too long I'm more likely to go back to using Neovim than complicate things with emacsclient.

So no, I am not going to start using emacsclient any time soon.

Footnotes:

1

I write "in a way" as it actually does nothing for Emacs startup time, it just shifts it to a point in time so you don't have to sit and wait for it to start.

2

I used to use Neovim, without any config, for most of this until recently.

October 01, 2023 05:13 AM

September 30, 2023

Matthew Sackman

Using rsync for backups

Blimey, almost exactly a year since my last post. I guess that’s what happens when you have a full-time job at a start-up.

I’ve run my own servers for 15 years or so. Regardless of the technologies used, servers have mutable state, and that needs backing up, ideally regularly. Whenever I have to do a lot of sysadmin work, e.g. major OS upgrades, or changing hosting provider, I always look at what backup services are provided, but they’re never super great. Even if they are offsite, accessing them can be tricky, and it seems unwise to trust that they would be available if the hosting company goes under. So I’ve basically just winged it, for roughly 15 years. And it’s been fine.

A few stories in the news recently, and having a spare weekend, made me consider whether maybe I should finally try to improve this situation. I have a machine here at home which would be a suitable backup destination. What follows here is more or less what I’ve done. From a security point-of-view, if anyone gets into any of my servers, or my home machines, it’s game-over, so provided nothing I’ve done here weakens the security of any of those machines, then that’s good enough for me.

Creating a backup needs to occur as root on the servers: it needs to access all sorts of files and directories all over the machine, regardless of their user. The question to answer is: should the backup be initiated on the backup machine, pulling data from the server; or should it be initiated on the server, pushing data to the backup machine? I don't want to add setuid binaries to the servers, and I don't want to allow root to log in via ssh on the servers. So really that means the backup must be initiated on the server, and push data out.

I have WireGuard connections between the backup machine and my servers, so connectivity is simple: there’s nothing fancy to do to get the servers to be able to connect to my backup machine.

So, initial setup:

  1. On the backup machine, create a new user, backup.
  2. Make sure root on the server has an ssh key-pair (ssh-keygen -t ed25519 etc).
  3. Make sure the public key side of that is in the authorized_keys for the backup user on the backup machine.

Authorized keys

A while ago, I learnt that you can put a lot of extra stuff in ~/.ssh/authorized_keys (man sshd). So for the backup user on the backup machine, the ~/.ssh/authorized_keys file looks rather like this:

command="/home/backup/bin/checkssh.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA comment

This ensures that the server cannot request various network forwarding, nor a terminal. Yes, earlier I said I’m not overly concerned with security provided I don’t make things worse, so these restrictions are something of a belt-and-braces approach. At the end of the day, if the server is compromised, then there’s probably nothing I can do to guarantee the integrity of the backups either. Now because I’m going to be using rsync, I want to stop the server from being able to overwrite ~backup/.ssh/authorized_keys, as that would also be bad. So on the backup machine:

chown -R root:backup ~backup/.ssh
chmod u=rx,g=rx,o= ~backup/.ssh
chmod u=r,g=r,o= ~backup/.ssh/authorized_keys

But, going back to the authorized_keys line, what’s that command="/home/backup/bin/checkssh.sh" bit? Well, that file started out as this:

#!/bin/bash
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    if [[ "$SSH_ORIGINAL_COMMAND" =~ ^rsync\  ]]; then
        echo "`/bin/date`: $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
        exec $SSH_ORIGINAL_COMMAND
    else
        echo "`/bin/date`: DENIED $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
    fi
fi

As the man-page says, command="command" specifies that the command is executed whenever this key is used for authentication. And also, the command originally supplied by the client is available in the SSH_ORIGINAL_COMMAND environment variable. So, this script is always run whenever the key is used for authentication, and we can check that the original command supplied is rsync. That rsync is a command that the server is requesting be run on the backup machine, i.e. it's a binary under the control of the backup machine; it needs to be in the PATH, etc.

With all of this in place, from the server:

  • trying to do a plain ssh backup@backup.wireguard.local should fail (due to no terminal being allowed).
  • doing something like ssh backup@backup.wireguard.local ls should fail silently because ls is not rsync (though we should have an entry in the ~backup/ssh-command-log file now saying DENIED ls).
  • but ssh backup@backup.wireguard.local rsync -h should work - you should get the help text from rsync back.

Again, I want to make sure rsync can’t overwrite the checkssh.sh script, so:

chown -R root:backup ~backup/bin
chmod u=rx,g=rx,o= ~backup/bin
chmod u=r,g=rx,o= ~backup/bin/checkssh.sh

Running rsync

With all that done, from the server I should be able to do:

rsync -az --stats /important/data/ backup@backup.wireguard.local:/home/backup/server/important/data/

It does work. However, because rsync on the backup machine isn't running as root (it's running as our backup user), all the ownership data of all the files and directories gets lost. This is rubbish. I want to keep using rsync, because of its ability to do incremental backups and general efficiency. But I don't want to lose file ownership or permissions. It's just data after all – it can't be that hard to keep it! The problem is that rsync is using the normal file-system on the backup machine, and its ability to set arbitrary owners and groups is limited as a consequence of running as the backup user.

So I started thinking about can I get rsync to write into a loop-back device? Something where the host OS isn’t going to interfere? After a bit of searching, it turns out loop-back is wrong, but a user namespace is right.

Namespaces, and unshare

These days, because of the rise and rise of containers, Linux supports a lot of different namespaces. E.g. from within one pid namespace, you can’t see the processes of another pid namespace. Similarly, the users and groups of one user namespace are isolated from the users and groups of a different user namespace. As a normal, unprivileged user, I can create (and enter) a new user namespace. I can even be root in that new user namespace! Thankfully, the root in this new namespace is very different to the root in the actual host, so I can’t do terrifying damage. But it is enough to be able to set arbitrary users, groups and permissions on files that I create within this namespace.

Let’s play around with this a bit.

backup@~/> touch foo
backup@~/> ls -l foo
-rw------- 1 backup backup 0 Sep 30 10:16 foo
backup@~/> unshare --user --map-auto --map-root-user
root@~/> ls -l foo
-rw------- 1 root root 0 Sep 30 10:16 foo
root@~/> touch bar
root@~/> ls -l bar
-rw------- 1 root root 0 Sep 30 10:17 bar
root@~/> ls -l /root
ls: cannot open directory '/root': Permission denied
root@~/> exit
backup@~/> ls -l foo bar
-rw------- 1 backup backup 0 Sep 30 10:16 foo
-rw------- 1 backup backup 0 Sep 30 10:17 bar

So, I created a file as the simple backup user. I used the unshare command to create and enter a new user namespace, and I became root within it. Files that I previously owned were still owned by me; but it seems like I am root! If I create new files as root, then when I exit the namespace, they’re owned by the normal backup user. Thankfully, even as this new root, it looks like I can’t access things that are restricted to the real host root, like /root!

Do I gain any new super-powers as this new root?

backup@~/> ls -l foo
-rw------- 1 backup backup 0 Sep 30 10:16 foo
backup@~/> chown 1111 foo
chown: changing ownership of 'foo': Operation not permitted
backup@~/> unshare --user --map-auto --map-root-user
root@~/> ls -l foo
-rw------- 1 root root 0 Sep 30 10:16 foo
root@~/> chown 1111 foo
root@~/> ls -l foo
-rw------- 1 1111 root 0 Sep 30 10:16 foo
root@~/> exit
backup@~/> ls -l foo
-rw------- 1 297718 backup 0 Sep 30 10:16 foo

Woah, yes I do! So the numeric 1111 user-id inside the namespace gets mapped to some other user-id outside of the namespace (you can configure this: see the man-pages for subuid and subgid). But yes, as root inside my new namespace, I have permission to use arbitrary user-ids!

If I go back in, it’s still all good – the mapping is stable and persistent:

backup@~/> ls -l foo
-rw------- 1 297718 backup 0 Sep 30 10:16 foo
backup@~/> unshare --user --map-auto --map-root-user
root@~/> ls -l foo
-rw------- 1 1111 root 0 Sep 30 10:16 foo
root@~/> exit

This is exactly what I need then. I just need to edit that checkssh.sh script so that the rsync command is run within its own user namespace:

#!/bin/bash
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    if [[ "$SSH_ORIGINAL_COMMAND" =~ ^rsync\  ]]; then
        echo "`/bin/date`: $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
        exec unshare --user --map-auto --map-root-user $SSH_ORIGINAL_COMMAND
    else
        echo "`/bin/date`: DENIED $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
    fi
fi

So now, from the server, the full rsync command looks like:

rsync -az --stats --numeric-ids --delete --chmod=o= \
   -M --log-file="/home/backup/rsync-$(date --rfc-3339=ns | sed -e 's/ /_/g').log" \
   /important/data/ backup@backup.wireguard.local:/home/backup/server/important/data/

A few other options have appeared here; the man-page for rsync is your best guide, but in brief:

  • --delete: if a file exists on the backup machine that doesn’t on the server then delete it. This means that the backup should be a snapshot of the server (more or less).
  • --chmod=o=: no files on the backup machine should be accessible in any way to other. Yes, this is throwing away a little of the data I’ve just worked hard to preserve. But still, I generally don’t want any of these files globally accessible.
  • -M --log-file="/home/backup/rsync-$(date --rfc-3339=ns | sed -e 's/ /_/g').log" This creates a log file on the backup machine of what it did. Could be handy one day. Doesn’t exactly hurt.

All done: using good old rsync for its fairly efficient incremental backups, with the reasonably new namespace capabilities and unshare, to ensure ownership and permissions are not lost, without having to allow root-logins anywhere.

September 30, 2023 04:01 PM

Magnus Therning

Using Emacs as $EDITOR

Continuing on from my experiment with using Emacs for scrollback in my terminal multiplexer, I thought I'd try to use it as my $EDITOR as well.

The two main cases where I use $EDITOR are

  1. The occasional use of git on the command line, rebasing or writing a commit message, and
  2. Use of ZSH's edit-command-line functionality.

To make sure Emacs is starting up quickly enough I'm using the same small setup I created for the scrollback editing, so I'm now setting EDITOR like this

export EDITOR="emacs -nw --init-directory ~/.se.d"

Now that I want to use the same setup for editing I can't really jump into view-mode every time Emacs starts so I have to be a bit more clever. The following bit won't do

(add-hook 'find-file-hook #'view-mode)

I need to somehow find out what starts Emacs and then only modify the hook when needed. Unfortunately I haven't found anything that reveals that Emacs is started by zellij. Creating a separate little script that zellij uses would be an option, of course, but for now I've opted to make it the default and instead refrain from adding the hook in the other two use cases.

ZSH doesn't make it easy to find out that it's edit-command-line either, but as I've observed that the command line sometimes doesn't look right after leaving the editor I wanted to call redisplay to fix it up. That means I need to have a function anyway, so using an environment variable becomes an easy way to check if Emacs is being used to edit the command line.

function se-edit-command-line() {
    export SE_SKIP_VIEW=y
    zle edit-command-line
    unset SE_SKIP_VIEW
    zle redisplay
}
zle -N se-edit-command-line

bindkey -M vicmd '^V' se-edit-command-line
bindkey -M viins '^V' se-edit-command-line

Unfortunately it seems zle edit-command-line doesn't pass on non-exported environment variables, hence the explicit export and unset.

When git starts an editor it sets a few environment variables so it was easy to just pick one that is set in both cases I care about. I picked GIT_EXEC_PATH.

With these things in place I changed the slim setup to only add the hook when neither of the environment variables are present

(unless (or (getenv "SE_SKIP_VIEW")
            (getenv "GIT_EXEC_PATH"))
  (add-hook 'find-file-hook #'view-mode))

Hopefully this works out well enough that I won't feel a need to go back to using Neovim as my $EDITOR.

September 30, 2023 01:24 PM

Gil Mizrahi

Implementing kind inference

In previous articles we talked about how to write an implementation of a type inference algorithm: one that can infer the type of complex expressions without type annotations and can provide a validation layer on top of our code for no effort on the user's part.

What we talked about was for validating expressions and expression definitions. But what about type definitions? How can we help the user catch errors when the types they define are inconsistent or don't make sense?

We want to be able to catch errors such as:

Tree a =
  | Node a Tree Tree
    -- ^ should be: Node a (Tree a) (Tree a)

And:

Rec f a =
  | Rec f (f a)
    -- ^ f is used both as a saturated type and a type that takes a parameter

And more, while still allowing the user to define complex types without annotation, such as:

Cofree f a =
  | Cofree a (f (Cofree f a))

As we'll soon see, we can use the exact same unification-based constraint solving approach to type inference we covered in this article to infer the type of a type, or as we call them in the Haskell world, the kind of a type.

Getting started

If you prefer to skip the explanations and jump straight to the code, click here.

In this article we will implement a kind inference engine in Haskell for a simple type system. We'll start by adding the relevant imports and language definitions for our Haskell module:

#!/usr/bin/env cabal
{- cabal:
build-depends: base, mtl, containers, uniplate
ghc-options: -Wall
-}

This first part lets us run this file as a script if we have ghc and cabal installed. Just chmod +x kinds.hs and run it.

We will use the GHC2021 set of extensions and LambdaCase, as well as a few additional modules that will come into play later.

-- | An example of a kind inference for data types using
-- unification-based constraint solving.
--
-- See the blog post:
-- <https://gilmi.me/blog/post/2023/09/30/kind-inference>

{-# Language GHC2021 #-}
{-# Language LambdaCase #-}

import Data.Data (Data)
import GHC.Generics (Generic)
import Data.Tuple (swap)
import Data.Maybe (listToMaybe)
import Data.Foldable (for_)
import Data.Traversable (for)
import Control.Monad (foldM)
import Control.Monad.State qualified as Mtl
import Control.Monad.Except qualified as Mtl
import Data.Generics.Uniplate.Data qualified as Uniplate (universe, transformBi)
import Data.Map qualified as Map

Models

Now we can start by defining our models. What are types? What do type definitions look like? What are kinds?

Let's start with a data type definition. We'll support ML style data definitions like the ones in Haskell. For example, the following data type:

Option a =
  | Some a
  | None

A data type definition starts with the type's name, its type parameters, and a list of variants where each has a constructor name and potentially several types.

We'll represent that using the following types:

-- | The representation of a data type definition.
data Datatype a
  = Datatype
    { -- | A place to put kind annotation in.
      dtAnn :: a
    , -- | The name of the data type.
      dtName :: TypeName
    , -- | Type parameters.
      dtParameters :: [TypeVar]
    , -- | Alternative variants.
      dtVariants :: [Variant a]
    }
  deriving (Show, Eq, Data, Generic, Functor, Foldable, Traversable)

-- | A Variant of a data type definition.
data Variant a
  = Variant
    { -- | A type constructor.
      vTypeConstructor :: String
    , -- | A list of types.
      vTypes :: [Type a]
    }
  deriving (Show, Eq, Data, Generic, Functor, Foldable, Traversable)

-- | A name of known types.
newtype TypeName = MkTypeName { getTypeName :: String }
  deriving (Show, Eq, Ord, Data, Generic)

-- | A type variable.
newtype TypeVar = MkTypeVar { getTypeVar :: String }
  deriving (Show, Eq, Ord, Data, Generic)

That polymorphic a is going to be used for our kind annotation.

The shape of the types that we are going to support in our type system are fairly simple. We support type names, such as Int and Option, type variables, such as a and t, and type application, which lets us apply higher kinded types such as Option with other types, such as Option Int, Either e a and f a.

-- | A representation of a type with a place for kind annotation.
data Type a
  = -- | A type variable.
    TypeVar a TypeVar
  | -- | A named type.
    TypeName a TypeName
  | -- | An application of two types, of the form `t1 t2`.
    TypeApp a (Type a) (Type a)
  deriving (Show, Eq, Data, Generic, Functor, Foldable, Traversable)

For example, the data type Option we defined earlier will be represented as a Datatype in the following way:

option =
  Datatype ()
    (MkTypeName "Option")
    [MkTypeVar "a"]
    [ Variant "Some" [TypeVar () $ MkTypeVar "a"]
    , Variant "None" []
    ]

And now let's talk about kinds. As we said before, kinds are the types of types. They describe whether a type can be applied to other types, what its arity should be, and what kinds of types can be placed in each slot.

For example, Option has the kind Type -> Type: it can be applied to a type that has the kind Type, such as Int, but cannot be applied to Option, or to two Ints.

There are also scenarios where a type variable can have any kind, for example in the following data type:

Proxy t =
  | Proxy

Since t is not used anywhere, Proxy can be applied to a type of any kind. We can have Proxy Int, but also Proxy Option. Let's define this as a data type:

-- | A representation of a kind.
data Kind
  = -- | For types like `Int`.
    Type
  | -- | For types like `Option`.
    KindFun Kind Kind
  | -- | For polymorphic kinds.
    KindVar KindVar
  | -- | For closing over polymorphic kinds.
    KindScheme [KindVar] Kind
  deriving (Show, Eq, Data, Generic)

-- | A kind variable.
newtype KindVar = MkKindVar { getKindVar :: String }
  deriving (Show, Eq, Ord, Data, Generic)
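To make these definitions concrete, here is how the kinds discussed so far look as values (a self-contained sketch that restates Kind and KindVar without the Data/Generic instances; the kind variable name "k" in proxyKind is our own choice):

```haskell
-- Kind and KindVar restated from above (without the Data/Generic
-- instances) so this sketch stands alone.
data Kind
  = Type
  | KindFun Kind Kind
  | KindVar KindVar
  | KindScheme [KindVar] Kind
  deriving (Show, Eq)

newtype KindVar = MkKindVar { getKindVar :: String }
  deriving (Show, Eq, Ord)

-- Option : Type -> Type
optionKind :: Kind
optionKind = KindFun Type Type

-- Either : Type -> Type -> Type
eitherKind :: Kind
eitherKind = KindFun Type (KindFun Type Type)

-- Proxy : forall k. k -> Type, since `t` may have any kind.
proxyKind :: Kind
proxyKind =
  KindScheme [MkKindVar "k"]
    (KindFun (KindVar (MkKindVar "k")) Type)
```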

These types represent a module of our language. During inference, we take a list of data types and a mapping from named types that might appear in these data types to their kinds. We infer the kinds of the data types and return them annotated with their kinds, or return an error if there was a problem.

We can capture this operation in this type signature:

infer :: Map.Map TypeName Kind -> [Datatype ()] -> Either Error [Datatype Kind]

Let's dive in and see how we can implement infer.

Kind Inference

Our kind inference algorithm, like our previous type inference algorithm, has 6 important parts:

  1. Topologically order definitions and group those that depend on one another (which we will not cover here).
  2. Elaboration and constraint generation
  3. Constraint solving
  4. Instantiation
  5. Substitution
  6. Generalization

The general process is as follows: we sort and group definitions by their dependencies, then we elaborate the data types by giving each type we meet a unique kind variable and collecting constraints on those kind variables according to their usage and placement. We then solve these constraints using unification, instantiating the polymorphic kinds we run into, and create a substitution, which is a mapping from kind variables to kinds. Next, we substitute the kind variables we gave each type in the elaboration stage with their mapped kinds from the substitution in the data type definitions. Finally, we generalize the kinds of the data type definitions and close over their free variables.

This looks somewhat like this:

-- | Infer the kind of a group of data types that should be solved together
--   (because they are mutually recursive).
infer :: Map.Map TypeName Kind -> [Datatype ()] -> Either Error [Datatype Kind]
infer kindEnv datatypes =
  -- initialize our `InferenceM` which is State + Except
  flip Mtl.evalState (initialState kindEnv) $ Mtl.runExceptT $ do
    -- Invent a kind variable for each data type
    for_ datatypes $ \(Datatype _ name _ _) -> do
      kindvar <- freshKindVar
      declareNamedType name kindvar
    -- Elaborate all of the data types
    datatypes' <- traverse elaborate datatypes
    -- Solve the constraints
    solveConstraints
    for datatypes' $ \(Datatype kindvar name vars variants) -> do
      -- Substitute the kind variable for a kind
      -- for the data type
      kind <- lookupKindVarInSubstitution kindvar
      -- ... and for all types
      variants' <- for variants $ traverse lookupKindVarInSubstitution
      -- generalize the data type's kind, and return.
      pure (Datatype (generalize kind) name vars variants')

Let's unpack all of that, step by step.

InferenceM

A couple of capabilities that are going to help us write less verbose code are managing State and throwing Exceptions. We will define a type that merges and provides these capabilities:

-- | We combine the capabilities of Except and State
--   For our kind inference code.
type InferenceM a = Mtl.ExceptT Error (Mtl.State State) a

And the types representing the errors we can throw, and the state we keep throughout the inference process:

-- | The errors that can be thrown in the process.
data Error
  = UnboundVar TypeVar
  | UnboundName TypeName
  | UnificationFailed Kind Kind
  | OccursCheckFailed (Maybe (Type ())) KindVar Kind
  deriving (Show)

-- | The state we keep during an inference cycle
data State = State
  { -- | Mapping from named types or type variables to kind variables.
    -- When we declare a new data type or a type variable, we'll add it here.
    -- When we run into a type variable or a type name during elaboration,
    -- we look up its matching kind variable here.
    env :: Map.Map (Either TypeName TypeVar) KindVar
  , -- | Mapping from existing named types to their kinds.
    -- Kinds for types that are supplied before the inference process can be found here.
    kindEnv :: Map.Map TypeName Kind
  , -- | Used for generating fresh kind variables.
    counter :: Int
  , -- | When we learn information about kinds during elaboration, we'll add it here.
    constraints :: [Constraint]
  , -- | The constraint solving process will generate this mapping from
    -- the kind variables we collected to the kind they should represent.
    -- If we don't find the kind variable in the substitution, that means
    -- it is a free variable we should close over.
    substitution :: Map.Map KindVar Kind
  }
  deriving (Show, Eq, Data, Generic)

-- | The state at the start of the process.
initialState :: Map.Map TypeName Kind -> State
initialState kindEnv =
  State mempty kindEnv 0 mempty mempty

-- | A constraint on kinds.
data Constraint
  = Equality Kind Kind
    -- ^ The two kinds should unify.
    -- If one of the kinds is a kind scheme, we will instantiate it, and
    -- add an equality constraint of the other kind with the instantiated kind.
  deriving (Show, Eq, Data, Generic)

We'll write special utility functions for interacting with this state as we run into them.
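To make the Equality constraint concrete, elaborating the Option data type from earlier produces constraints along these lines (a self-contained sketch restating slightly simplified versions of the types above; the names k0 and k1 are an assumption, since the actual names depend on generation order):

```haskell
-- Simplified restatements of Kind, KindVar and Constraint from above.
data Kind
  = Type
  | KindFun Kind Kind
  | KindVar KindVar
  | KindScheme [KindVar] Kind
  deriving (Show, Eq)

newtype KindVar = MkKindVar String
  deriving (Show, Eq, Ord)

data Constraint = Equality Kind Kind
  deriving (Show, Eq)

-- Elaborating `Option a = Some a | None`, assuming Option was given the
-- kind variable k0 in `infer` and its parameter `a` was given k1:
optionConstraints :: [Constraint]
optionConstraints =
  [ -- the field `a` of the `Some` variant must have kind `Type`
    Equality (KindVar (MkKindVar "k1")) Type
    -- `Option a = ...` means Option's kind is `aKind -> Type`
  , Equality (KindVar (MkKindVar "k0"))
             (KindFun (KindVar (MkKindVar "k1")) Type)
  ]
```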

Elaboration and constraint generation

In this section we want to traverse a data type, annotate the types with fresh kind variables, and generate constraints according to the types' location and usage.

-- | Invent kind variables for types we don't know and add constraints
--   on them according to their usage.
elaborate :: Datatype () -> InferenceM (Datatype KindVar)
elaborate (Datatype _ datatypeName vars variants) = do
  -- We go over each of the data type parameters and
  -- generate a fresh kind variable for them.
  varKinds <- for vars $ \var -> do
    kindvar <- freshKindVar
    declareTypeVar var kindvar
    pure kindvar

  -- We go over the variants, elaborate each field,
  -- and return the elaborated variants.
  variants' <- for variants $ \(Variant name fields) -> do
    Variant name <$>
      for fields
        ( \field -> do
          field' <- elaborateType field
          -- a constraint on fields: their kind must be `Type`.
          newEqualityConstraint (KindVar $ getAnn field') Type
          pure field'
        )

  -- We grab the kind variable of the data type
  -- so we can add a constraint on it.
  datatypeKindvar <- lookupNameKindVar datatypeName
  -- A type of the form `T a b c ... =` has the kind:
  -- `aKind -> bKind -> cKind -> ... -> Type`.
  -- We add that as a constraint.
  let kind = foldr KindFun Type $ map KindVar varKinds
  newEqualityConstraint (KindVar datatypeKindvar) kind

  -- We return the elaborated data type after annotating
  -- all types with kind variables and generating constraints.
  pure (Datatype datatypeKindvar datatypeName vars variants')

There are a couple of utility functions we've used in the last snippet:

We generate a fresh kind variable for each type variable declared. After constraint solving we'll find the kind variable again and learn what the actual kind should be in its place.

We'll also save the kind variable we generated for the type variable in the environment, so we can find it later when it is used.

-- | Generate a fresh kind variable.
freshKindVar :: InferenceM KindVar
freshKindVar = do
  s <- Mtl.get
  let kindvar = MkKindVar ("k" <> show (counter s))
  Mtl.put s { counter = 1 + counter s }
  pure kindvar

-- | Insert declared type variables into the environment.
declareTypeVar :: TypeVar -> KindVar -> InferenceM ()
declareTypeVar var kindvar =
  Mtl.modify $ \s ->
    s { env = Map.insert (Right var) kindvar (env s) }

We've used freshKindVar and the following declareNamedType before in infer when we ran into the data type declaration.

-- | Insert declared type names into the environment.
declareNamedType :: TypeName -> KindVar -> InferenceM ()
declareNamedType name kindvar =
  Mtl.modify $ \s ->
    s { env = Map.insert (Left name) kindvar (env s) }

We also fetch the kind we annotated a type with using getAnn:

-- | Get the annotation of a type.
getAnn :: Type a -> a
getAnn = \case
  TypeVar a _ -> a
  TypeName a _ -> a
  TypeApp a _ _ -> a

Another important utility function is for adding constraints:

-- | Add a new equality constraint to the state.
newEqualityConstraint :: Kind -> Kind -> InferenceM ()
newEqualityConstraint k1 k2 =
  Mtl.modify $ \s ->
    s { constraints = Equality k1 k2 : constraints s }

Elaborating types

The next part is elaborating types. As a reminder, we support named types, type variables, and applications of a type to a type.

For type variables, we added them to the environment previously when we saw them declared. We look them up. If they are not there, that's an error.

-- | Find the kind variable of a type variable in the environment.
lookupVarKindVar :: TypeVar -> InferenceM KindVar
lookupVarKindVar var =
  maybe
    (Mtl.throwError $ UnboundVar var)
    pure
    . Map.lookup (Right var)
    . env =<< Mtl.get

For named types, either their kind was supplied to the inference stage, in which case we invent a new kind variable for them for this particular use, or they were declared as part of this data type group, in which case we look them up in the environment.

-- | Find the kind variable of a named type in the environment.
lookupNameKindVar :: TypeName -> InferenceM KindVar
lookupNameKindVar name = do
  state <- Mtl.get
  -- We first look up the named type in the supplied kind env.
  case Map.lookup name (kindEnv state) of
    -- If we find it, we generate a new kind variable for it
    -- and constrain it to be this kind, so that each use has its own
    -- kind variable (later used for instantiation).
    Just kind -> do
      kindvar <- freshKindVar
      newEqualityConstraint (KindVar kindvar) kind
      pure kindvar
    -- If it's not a supplied type, it means we are actively inferring it,
    -- and we need to use the same kind variable for all uses.
    -- We'll look it up in our environment of declared types.
    Nothing ->
      maybe
        -- If we still can't find it, we error.
        (Mtl.throwError $ UnboundName name)
        pure
        . Map.lookup (Left name)
        $ env state

And for a type application of t1 and t2, we elaborate both types, invent a kind variable for the type application, then constrain the kind of the applied type t1 to be equal to a kind function that takes the kind of t2 and returns the kind of the type application.

This is the rest of the code for elaborating types:

-- | Elaborate a type with a kind variable and add constraints
--   according to usage.
elaborateType :: Type () -> InferenceM (Type KindVar)
elaborateType = \case
  -- for type variables and type names,
  -- we lookup the kind variables we generated when we ran into
  -- the declaration of them.
  TypeVar () var ->
    fmap (\kindvar -> TypeVar kindvar var) (lookupVarKindVar var)

  TypeName () name ->
    fmap (\kindvar -> TypeName kindvar name) (lookupNameKindVar name)

  -- for type application
  TypeApp () t1 t2 -> do
    -- we elaborate both types
    t1Kindvar <- elaborateType t1
    t2Kindvar <- elaborateType t2
    -- then we generate a kind variable for the type application
    typeAppKindvar <- freshKindVar
    -- then we constrain the type application kind variable
    -- it should unify with `t2Kind -> typeAppKind`.
    newEqualityConstraint
      (KindVar $ getAnn t1Kindvar)
      (KindFun (KindVar $ getAnn t2Kindvar) (KindVar typeAppKindvar))

    pure (TypeApp typeAppKindvar t1Kindvar t2Kindvar)

And that's it for the elaboration phase. After giving each type a kind variable and collecting some constraints about them, we are ready for the next stage, where we can ignore the data type definition and focus on the constraints we generated.

Constraint solving and generating a substitution

In this phase we go one constraint at a time and decide whether it is trivial (equality between Type and Type), or if it needs to be reduced to simpler constraints that will be checked (like matching the two first parts and the two second parts of two KindFuns).

When we run into kind variables, we will substitute them with the other kind in the rest of the equality constraints and in a mapping we'll keep on the side which we'll call a "substitution" and keep going.

When we run into a kind scheme, we instantiate it (give it a new unique instance) and constrain it with the other kind in the constraint.

When we run into two kinds that cannot be unified (Type and KindFun), we throw an error.

When there are no more constraints left to solve, we are done and succeeded on our task!

-- | Solve constraints according to logic.
--   this process is iterative. We continue fetching
--   the next constraint and try to solve it.
--
--   Each step can either reduce or increase the number of constraints,
--   and we are done when there are no more constraints to solve,
--   or if we ran into a constraint that cannot be solved.
solveConstraints :: InferenceM ()
solveConstraints = do
  -- Pop the next constraint we should solve.
  constraint <- do
    c <- listToMaybe . constraints <$> Mtl.get
    Mtl.modify $ \s -> s { constraints = drop 1 $ constraints s }
    pure c

  case constraint of
    -- If we have two 'Type's, they unify. We can skip to the next constraint.
    Just (Equality Type Type) -> solveConstraints
    -- We have an equality between two kind functions.
    -- We add two new equality constraints matching the first parts
    -- and the second parts.
    Just (Equality (KindFun k1 k2) (KindFun k3 k4)) -> do
      Mtl.modify $ \s ->
        s { constraints = Equality k1 k3 : Equality k2 k4 : constraints s }
      solveConstraints
    -- When we run into a kind scheme, we instantiate it
    -- (we look at the kind and replace all closed kind variables
    -- with fresh kind variables), and add an equality constraint
    -- between the other kind and the instantiated kind.
    Just (Equality (KindScheme vars kind) k) -> do
      kind' <- instantiate kind vars
      Mtl.modify $ \s ->
        s { constraints = Equality k kind' : constraints s }
      solveConstraints
    -- Same as the previous scenario.
    Just (Equality k (KindScheme vars kind)) -> do
      kind' <- instantiate kind vars
      Mtl.modify $ \s ->
        s { constraints = Equality kind' k : constraints s }
      solveConstraints
    -- If we run into a kind variable on one of the sides,
    -- we replace all instances of it with the other kind and continue.
    Just (Equality (KindVar var) k) -> do
      replaceInState var k
      solveConstraints
    -- The same as the previous scenario, but the kind var is on the other side.
    Just (Equality k (KindVar var)) -> do
      replaceInState var k
      solveConstraints
    -- If we have an equality constraint between a 'Type' and
    -- a 'KindFun', we cannot unify the two, and unification fails.
    Just (Equality k1@Type k2@KindFun{}) -> Mtl.throwError (UnificationFailed k1 k2)
    Just (Equality k1@KindFun{} k2@Type) -> Mtl.throwError (UnificationFailed k1 k2)
    -- If there are no more constraints, we are done. Good job!
    Nothing -> pure ()
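Walking through the solver by hand on the two Option constraints from earlier clarifies the process (the kind variable names k0 and k1 are an assumption, and the types are restated in simplified form so the sketch stands alone):

```haskell
import qualified Data.Map as Map

-- Simplified restatements of Kind and KindVar from above.
data Kind
  = Type
  | KindFun Kind Kind
  | KindVar KindVar
  | KindScheme [KindVar] Kind
  deriving (Show, Eq)

newtype KindVar = MkKindVar String
  deriving (Show, Eq, Ord)

-- Starting constraints for `Option a = Some a | None`:
--   1. Equality (KindVar k1) Type
--   2. Equality (KindVar k0) (KindFun (KindVar k1) Type)
--
-- Step 1: constraint 1 has a kind variable on one side, so we replace
--         k1 with Type in the remaining constraints and record
--         k1 -> Type; constraint 2 becomes
--         Equality (KindVar k0) (KindFun Type Type).
-- Step 2: k0 is a kind variable, so we record k0 -> KindFun Type Type.
-- No constraints remain, and the final substitution is:
optionSubstitution :: Map.Map KindVar Kind
optionSubstitution =
  Map.fromList
    [ (MkKindVar "k1", Type)
    , (MkKindVar "k0", KindFun Type Type)
    ]
```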

Let's talk about a few of these operations.

Instantiating kind schemes

When we run into a kind scheme (a kind containing polymorphic kind variables) in a constraint, we actually want to work with an instance of that kind scheme. So we take the kind scheme and produce a kind where all of the bound kind variables are replaced with fresh kind variables.

-- | Instantiate a kind.
--   We look at the kind and replace all closed kind variables
--   with fresh kind variables.
instantiate :: Kind -> [KindVar] -> InferenceM Kind
instantiate = foldM replaceKindVarWithFreshKindVar

-- | Replace a kind variable with a fresh variable in the kind.
replaceKindVarWithFreshKindVar :: Kind -> KindVar -> InferenceM Kind
replaceKindVarWithFreshKindVar kind var = do
  kindvar <- freshKindVar
  -- Uniplate.transformBi lets us perform reflection and
  -- apply a function to all instances of a certain type
  -- in a value. Think of it like `fmap`, but for any type.
  --
  -- It is a bit slow though, so it's worth replacing it with
  -- hand-rolled recursion or a functor, but it's convenient.
  pure $ flip Uniplate.transformBi kind $ \case
    kv | kv == var -> kindvar
    x -> x

Note: We are using uniplate with the interface that works for every type that has an instance of Data from Data.Data. It lets us use generic traversals and transformations with very little effort, but it is considerably slower than hand-written code, so for a real kind inference implementation you probably want to hand-write the traversals.
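As a pure sketch of the same idea, instantiation can be written by threading an Int counter instead of InferenceM and hand-rolling the substitution instead of using transformBi (the helper names rename and instantiatePure are our own):

```haskell
-- Simplified restatements of Kind and KindVar from above.
data Kind
  = Type
  | KindFun Kind Kind
  | KindVar KindVar
  | KindScheme [KindVar] Kind
  deriving (Show, Eq)

newtype KindVar = MkKindVar String
  deriving (Show, Eq, Ord)

-- Replace one kind variable with another by direct recursion,
-- a hand-rolled stand-in for Uniplate.transformBi.
rename :: KindVar -> KindVar -> Kind -> Kind
rename from to = go
  where
    go Type = Type
    go (KindFun k1 k2) = KindFun (go k1) (go k2)
    go (KindVar v) = KindVar (if v == from then to else v)
    go (KindScheme vars k) = KindScheme vars (go k)

-- Instantiate the body of a scheme: each bound variable is replaced
-- with a fresh name k<n>, k<n+1>, ...
instantiatePure :: Int -> [KindVar] -> Kind -> Kind
instantiatePure n vars kind = snd (foldl step (n, kind) vars)
  where
    step (i, k) var = (i + 1, rename var (MkKindVar ("k" <> show i)) k)
```

For example, instantiating the body of `forall a. a -> Type` with the counter at 0 replaces a with the fresh variable k0.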

Replacing kind variables

When we run into a kind variable, we replace every occurrence of it in the rest of the constraints and in the substitution with the kind on the other side of the equality constraint, and then add the new mapping to the substitution.

We change it in the rest of the constraints so that if we have the following two constraints:

1. Equality (KindVar "k1") Type
2. Equality (KindVar "k1") (Type -> Type)

When we replace KindVar "k1" with Type in the rest of the constraints, instead of the next constraint being (2), it will be:

Equality Type (Type -> Type)

Which does not unify, and we catch the bug.

We also replace the kind variable in the substitution before adding the new mapping to it, so that we can later look up the kind variables we placed on each type in the elaboration phase and find their kinds.

-- | Replace every instance of 'KindVar var' in our state with 'kind'.
--   And add it to the substitution.
replaceInState :: KindVar -> Kind -> InferenceM ()
replaceInState var kind = do
  occursCheck var kind
  s <- Mtl.get
  let
    -- Uniplate.transformBi lets us perform reflection and
    -- apply a function to all instances of a certain type
    -- in a value. Think of it like `fmap`, but for any type.
    --
    -- Note that we are changing all instances of `Kind` of the form
    -- `KindVar v | v == var` in all of `State`! This includes both the
    -- `substitution` and the remaining `constraints`.
    --
    -- It is a bit slow though, so it's worth replacing it with
    -- hand-rolled recursion or a functor, but it's convenient.
    s' =
      flip Uniplate.transformBi s $ \case
        KindVar v | v == var -> kind
        x -> x
  Mtl.put $ s' { substitution = Map.insert var kind (substitution s') }

But one important thing we need to check about the kind variable and the kind is that the kind does not contain the kind variable; if it does, we have an "infinite" kind. This check is called the occurs check.

-- | We check that the kind variable does not appear in the kind
--   and throw an error if it does.
occursCheck :: KindVar -> Kind -> InferenceM ()
occursCheck var kind =
  if KindVar var == kind || null [ () | KindVar v <- Uniplate.universe kind, var == v ]
    then pure ()
    else do
      -- We try to find the type of the kind variable by doing a reverse lookup,
      -- but this might not succeed because the kind variable might have been
      -- generated during constraint solving.
      -- We might be able to find the type if we looked at the substitution as well,
      -- but for now let's leave it at this "best effort" attempt.
      reverseEnv <- map swap . Map.toList . env <$> Mtl.get
      let typ = either (TypeName ()) (TypeVar ()) <$> lookup var reverseEnv
      Mtl.throwError (OccursCheckFailed typ var kind)

Once again we use the universe function from the uniplate library, which returns all values of the same type that appear in a value, so we find all of the kinds inside our kind and select the kind variables specifically.

Other errors

If we run into an equality constraint between a Type and a KindFun, we throw an error, since we can't unify them.

Substitution

Once we finish with constraint solving, we'll have a substitution ready for us in State.

All we need to do now is look up the kind produced by the substitution for each kind variable.

-- | Look up what the kind of a kind variable is in the substitution
--   produced by constraint solving.
--   If there was no constraint on the kind variable, it won't appear
--   in the substitution, which means it can stay a kind variable which
--   we will close over later.
lookupKindVarInSubstitution :: KindVar -> InferenceM Kind
lookupKindVarInSubstitution kindvar =
  maybe (KindVar kindvar) id . Map.lookup kindvar . substitution <$> Mtl.get

Generalization

When we are done with elaborating, solving constraints, and substituting over data types, we need to look at the kind produced for each data type, and close over the free kind variables.

Again, we use Uniplate.universe to find all of the kind variables and include them in the kind scheme.

-- | Close over kind variables we did not solve.
generalize :: Kind -> Kind
generalize kind = KindScheme [var | KindVar var <- Uniplate.universe kind] kind
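For example, generalizing the kind `k0 -> Type` should close over k0. Here is a uniplate-free sketch of the same operation (restating Kind and KindVar in simplified form; freeKindVars hand-rolls what the universe-based list comprehension does above):

```haskell
-- Simplified restatements of Kind and KindVar from above.
data Kind
  = Type
  | KindFun Kind Kind
  | KindVar KindVar
  | KindScheme [KindVar] Kind
  deriving (Show, Eq)

newtype KindVar = MkKindVar String
  deriving (Show, Eq, Ord)

-- Collect the kind variables occurring in a kind. In practice the kinds
-- we generalize contain no inner schemes, so skipping bound variables
-- is only defensive.
freeKindVars :: Kind -> [KindVar]
freeKindVars Type = []
freeKindVars (KindFun k1 k2) = freeKindVars k1 <> freeKindVars k2
freeKindVars (KindVar v) = [v]
freeKindVars (KindScheme vars k) = filter (`notElem` vars) (freeKindVars k)

-- | Close over kind variables we did not solve.
generalize :: Kind -> Kind
generalize kind = KindScheme (freeKindVars kind) kind
```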

Examples

That's pretty much it! We can now define data types and observe the kinds we produce.

option :: Datatype ()
option =
  Datatype ()
    (MkTypeName "Option")
    [MkTypeVar "a"]
    [ Variant "Some" [TypeVar () $ MkTypeVar "a"]
    , Variant "None" []
    ]

main :: IO ()
main = do
  print $ map dtAnn <$> infer mempty [option]

Will output:

Right [KindScheme [] (KindFun Type Type)]

As promised, our kind inference engine is able to infer the kind of this type:

Cofree f a =
  | Cofree a (f (Cofree f a))

As expected:

Cofree : (Type -> Type) -> Type -> Type
Cofree f a =
  | Cofree a (f (Cofree f a))
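For reference, here is how that Cofree declaration looks as a value of the tutorial's syntax tree, ready to pass to infer (the syntax types are restated in simplified, positional form so the sketch stands alone):

```haskell
-- Simplified, positional restatements of the syntax types from earlier.
data Datatype a = Datatype a TypeName [TypeVar] [Variant a]
  deriving (Show, Eq)

data Variant a = Variant String [Type a]
  deriving (Show, Eq)

data Type a
  = TypeVar a TypeVar
  | TypeName a TypeName
  | TypeApp a (Type a) (Type a)
  deriving (Show, Eq)

newtype TypeName = MkTypeName String deriving (Show, Eq, Ord)
newtype TypeVar = MkTypeVar String deriving (Show, Eq, Ord)

-- Cofree f a = Cofree a (f (Cofree f a))
cofree :: Datatype ()
cofree =
  Datatype ()
    (MkTypeName "Cofree")
    [MkTypeVar "f", MkTypeVar "a"]
    [ Variant "Cofree"
        [ TypeVar () (MkTypeVar "a")
          -- `f (Cofree f a)`: type application nests to the left
        , TypeApp ()
            (TypeVar () (MkTypeVar "f"))
            (TypeApp ()
              (TypeApp ()
                (TypeName () (MkTypeName "Cofree"))
                (TypeVar () (MkTypeVar "f")))
              (TypeVar () (MkTypeVar "a")))
        ]
    ]
```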

And can catch errors such as:

Tree a =
  | Node a Tree Tree

And produce the error:

Unification failed between the following kinds:
  * k1 -> Type
  * Type

You can find other examples in the gist.

Summary

Kind inference using unification-based constraint solving works on data types in the same way that type inference with the same methods works on expressions. While it can be a bit tricky, implementing a somewhat powerful kind inference engine is relatively straightforward.

You can find the source code in this gist. It includes pretty printing code, additional examples, and a lot of comments.

September 30, 2023 12:00 AM

September 29, 2023

GHC Developer Blog

GHC 9.8.1-rc1 is now available

GHC 9.8.1-rc1 is now available

bgamari - 2023-09-29

The GHC developers are very pleased to announce the availability of the release candidate of GHC 9.8.1. Binary distributions, source distributions, and documentation are available at downloads.haskell.org.

GHC 9.8 will bring a number of new features and improvements, including:

  • Preliminary support for the TypeAbstractions language extension, allowing types to be bound in type declarations.

  • Support for the ExtendedLiterals extension, providing syntax for non-word-sized numeric literals in the surface language.

  • Improved rewrite rule matching behavior, allowing limited matching of higher-order patterns

  • Better support for user-defined warnings by way of the WARNING pragma.

  • The introduction of the new GHC.TypeError.Unsatisfiable constraint, allowing more predictable user-defined type errors.

  • Implementation of the export deprecation proposal, allowing module exports to be marked with DEPRECATED pragmas.

  • The addition of build semaphore support for parallel compilation; with coming support in cabal-install this will allow better use of parallelism in multi-package builds.

  • More efficient representation of info table provenance information, reducing binary sizes by over 50% in some cases when -finfo-table-map is in use

A full accounting of changes can be found in the release notes. This candidate includes roughly 20 new commits relative to alpha 4, including what we believe should be nearly the last changes to GHC’s boot libraries. As always, GHC’s release status can be found on the GHC Wiki.

We would like to thank GitHub, IOG, the Zw3rk stake pool, Well-Typed, Tweag I/O, Serokell, Equinix, SimSpace, the Haskell Foundation, and other anonymous contributors whose on-going financial and in-kind support has facilitated GHC maintenance and release management over the years. Finally, this release would not have been possible without the hundreds of open-source contributors whose work comprises this release.

As always, do give this release a try and open a ticket if you see anything amiss.

Happy compiling,

Ben

by ghc-devs at September 29, 2023 12:00 AM

September 25, 2023

GHC Developer Blog

GHC 9.6.3 is now available

GHC 9.6.3 is now available

Bryan Richter - 2023-09-25

The GHC developers are happy to announce the availability of GHC 9.6.3. Binary distributions, source distributions, and documentation are available on the release page.

This release is primarily a bugfix release addressing a few issues found in the 9.6 series. These include:

  • Disable Polymorphic Specialisation (a performance optimisation) by default. It was discovered that Polymorphic Specialisation as currently implemented in GHC can lead to hard to diagnose bugs resulting in incorrect runtime results. Users wishing to use this optimisation despite the caveats will now have to explicitly enable the new -fpolymorphic-specialisation flag. For more details see #23469 as well as #23109, #21229, #23445.

  • Improve compile time and code generation performance when -finfo-table-map is enabled (#23103).

  • Make the recompilation check more robust when code generation flags are changed (#23369).

  • Addition of memory barriers that improve soundness on platforms with weak memory ordering.

  • And dozens of other fixes.

A full accounting of changes can be found in the release notes. As some of the fixed issues do affect correctness users are encouraged to upgrade promptly.

We would like to thank Microsoft Azure, GitHub, IOG, the Zw3rk stake pool, Well-Typed, Tweag I/O, Serokell, Equinix, SimSpace, Haskell Foundation, and other anonymous contributors whose on-going financial and in-kind support has facilitated GHC maintenance and release management over the years. Finally, this release would not have been possible without the hundreds of open-source contributors whose work comprises this release.

As always, do give this release a try and open a ticket if you see anything amiss.

Enjoy!

-Bryan

by ghc-devs at September 25, 2023 12:00 AM

September 24, 2023

Magnus Therning

Defining a formatter for Cabal files

For Haskell code I can use lsp-format-buffer and lsp-format-region to keep my file looking nice, but I've never found a function for doing the same for Cabal files. There's a nice command line tool, cabal-fmt, for doing it, but it means having to jump to a terminal. It would of course be nicer to satisfy my needs for aesthetics directly from Emacs. A few times I've thought of writing the function myself, I mean how hard can it be? But then I've forgotten about it until the next time I'm editing a Cabal file.

A few days ago I noticed emacs-reformatter popping up in my feeds. That removed all reasons to procrastinate. It turned out to be very easy to set up.

The package doesn't have a recipe for straight.el so it needs a :straight section. Also, the naming of the file in the package doesn't fit the package name, hence the slightly different name in the use-package declaration:1

(use-package reformatter
  :straight (:host github
             :repo "purcell/emacs-reformatter"))

Now the formatter can be defined

(reformatter-define cabal-format
  :program "cabal-fmt"
  :args '("/dev/stdin"))

in order to create functions for formatting, cabal-format-buffer and cabal-format-region, as well as a minor mode for formatting on saving a Cabal file.

Footnotes:

1

I'm sure it's possible to use :files to deal with this, but I'm not sure how and my naive guess failed. It's OK to be like this until I figure it out properly.

September 24, 2023 08:20 AM

September 21, 2023

Tweag I/O

Behind the scenes with FawltyDeps v0.13.0: Matching imports with dependencies

We have previously introduced FawltyDeps, a tool to help Python projects avoid the dreaded, and seemingly unavoidable, state where dependencies declared in the configuration do not match those actually imported in the code1. FawltyDeps is the perfect addition to your CI, your pre-commit hooks, or your dependency management arsenal.

Curious to know how FawltyDeps works its magic? In this sequel we’ll delve into an essential component of FawltyDeps: how it matches imports and dependencies behind the scenes, and why it is important to get this matching right.

We’ve been busy working on an improved mapping strategy that combines versatility with simplicity, and we have come a long way from the quite limited version we presented in our first announcement. By the end of this post, you’ll have a solid understanding of FawltyDeps’ brand new mapping options and how to tailor them to your project’s unique context and needs.

Matching imports and dependencies

Simply put, FawltyDeps extracts imports from your code, and dependencies declared in your project configuration, and matches them against each other:

  • the imports that are not present in your declared dependencies are reported as undeclared dependencies
  • the declared dependencies that are not imported in your code are reported as unused dependencies.
Figure 1. An illustration of extracting imports and dependencies in a Python project and matching them to each other.

When matching imports and dependencies, we first assume that a dependency (specifically: the package it references) and an import have the same name. This approximation works well for many Python packages. numpy is a good example: in your code, you write import numpy, and to install it you run pip install numpy, or you list numpy in your requirements.txt (or wherever you list your project dependencies).

Problem solved! So why are we even writing this post?

It turns out that, as always, things are not that simple™. Many packages provide import names that are different from the package name. For example:

  • You depend on the pyyaml package, but you import yaml (as seen in Figure 1).
  • You depend on the scikit-learn package, but you import sklearn.
  • You depend on the setuptools package, but you import either pkg_resources or some other import, as setuptools exposes multiple imports.

Clearly our first approximation (hereafter referred to as the identity mapping) is not good enough. To solve this, we need a smarter mapping: a way to figure out which packages correspond to which imports. In practice, there are a few different ways to acquire these mappings, each having its advantages and limitations. Our main goal here is to lay out the mappings we support in FawltyDeps, and explain how they can be used individually or together to resolve packages into their respective imports.

Mapping from already-installed packages

Arguably, the only correct way for FawltyDeps to match packages to imports is to actually ask each package what imports it provides. FawltyDeps can do this3, but it first needs to find where the packages are installed, and that turns out to be more complicated than one might think.

In the first versions of FawltyDeps, we had not yet properly drilled into this issue. Instead, we only looked at the Python environment in which FawltyDeps itself was already running, and we simply assumed that your project dependencies should be installed into the same environment 4. If a dependency of your project was not found in this environment, we would fall back to the identity mapping.

However, this simply pushed the problem onto the user and made FawltyDeps harder to use. What we wanted instead was for FawltyDeps to resolve the dependencies wherever they may be installed. This is where things get very complicated: in general, there is a bewildering variety of ways to install dependencies in the Python world.


We are not going to open the entire Pandora’s box of Python packaging and dependency management in this blog post, except to note some different examples of where Python packages (specifically: your project’s third-party dependencies) can typically be found:

  • System-wide package locations, like those found under /usr/lib/python* or /usr/local/lib/python* (whether installed by your system’s package manager or system-wide pip install).
  • User-specific packages, installed by tools like pipx install or pip install --user.
  • Virtual environments (from venv, virtualenv, Poetry, PDM, etc.), located either within your project, or somewhere else.
  • Other, less common, methods or locations5 that resemble any of the above.

We would like FawltyDeps to work with as many of these as possible and, where possible, to discover and use them automatically by default.

As of v0.13.0 we have come a long way towards realizing this vision: We support the kinds of Python environments mentioned above (for FawltyDeps’ purpose, a “Python environment” really means any directory in which Python packages could be installed), and the following diagram outlines how FawltyDeps determines which Python environments are used to look up the project’s dependencies:

Figure 2. FawltyDeps’ strategy for finding local Python environments

In other words:

  • The --pyenv option lets you point to one or more Python environments. All of these environments will be used when matching dependencies to imports.
  • If --pyenv is not used, FawltyDeps will automatically find and use Python environments that exist within your project directories (i.e. within any directory that is passed as a positional argument to FawltyDeps, aka. “basepath”, or the current directory by default).
  • If no Python environment is found by the two methods above, FawltyDeps will fall back to using the environment in which it’s running.

There is still some way to go until all the details are perfect here6, but we believe this approach covers most common cases well.

Temporarily installing dependencies to complete the mapping

There is an elephant in the room that we have not yet talked about: Sometimes you may be running FawltyDeps on a project where the project dependencies are not installed at all! Then what can you do? (Assuming that you don’t want to go through the bother of installing packages manually.) Until recently FawltyDeps would simply fall back to the identity mapping for any packages that it could not find locally, with the undeclared/unused report provided by FawltyDeps suffering as a result.

With the new --install-deps option introduced in v0.13.0, we can now provide a better alternative. With this option, FawltyDeps does not fall back to the identity mapping; instead, it automatically runs pip install to install the unresolved dependencies (from PyPI, by default7) into a temporary virtualenv8, and then uses this as an additional source for the dependency-to-import mapping. For dependencies that are not found locally, this allows FawltyDeps to come up with the correct mapping (and hence produce a much better undeclared/unused report) rather than relying on the imperfect identity mapping.

Since this is a potentially expensive strategy we have chosen to hide it behind the --install-deps command-line option. If you want to always enable this option, you can set the corresponding install_deps configuration variable to true in the [tool.fawltydeps] section of your pyproject.toml.

Note that there is no guarantee that we’re able to resolve all dependencies with this method: For example, there could be a typo in your declared dependency that means it will never be found on PyPI, or there could be other circumstances (e.g. network issues) that prevent this strategy from working at all. What happens with such unresolved dependencies will be covered below.
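Conceptually, the --install-deps fallback boils down to something like the following sketch (illustrative only, not FawltyDeps' actual code; the function name is made up):

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

def install_into_temp_venv(requirements):
    # Create a throwaway virtualenv to install the unresolved deps into.
    env_dir = Path(tempfile.mkdtemp(prefix="fawltydeps-sketch-"))
    venv.create(env_dir, with_pip=True)
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    python = env_dir / bindir / "python"
    # check=True: abort on any failed install instead of silently
    # falling back to the identity mapping.
    subprocess.run(
        [str(python), "-m", "pip", "install", "--quiet", *requirements],
        check=True,
    )
    return env_dir
```

The installed packages in the returned environment can then be queried for their provided imports, just like any other local Python environment.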

User-defined mapping

The mappings discussed above have FawltyDeps look into packages that are actually installed (whether in an existing local environment or temporarily by FawltyDeps). But this might not always be achievable in practice. You might want to run FawltyDeps in your CI, possibly on multiple libraries, without having to either set up a local environment or access packages from outside sources (like PyPI).

A simple solution to this is to provide FawltyDeps with your own custom mapping.9 We have chosen not to ship any database with the code as it needs to be frequently updated, with no guarantee of it covering all Python packages. Instead, we allow users to provide their own custom TOML mapping. This mapping does not have to be complete and it can be used in conjunction with the other mappings discussed in this article. We talk more about how FawltyDeps combines different mappings in the following section.

Putting it together: FawltyDeps’ mapping strategy

Now that we have gathered all these mappings, let’s see how to best combine them.

Overall, we have three guiding principles in this endeavor:

  • Completeness: we should be able to resolve all dependencies extracted from a project into associated import names, as otherwise we cannot reach any conclusions about undeclared or unused dependencies.
  • Correctness: some mappings offer a higher level of correctness than others. Identity mapping, for example, is correct for many - but certainly not all - packages. Resolving a dependency via a locally installed package offers a higher guarantee of correctness.
  • Transparency: we should be able to trace back what mapping was used to resolve any given dependency. This allows users to discover where they may improve the information passed to FawltyDeps (e.g. using --pyenv to point at the most appropriate Python environments). It also makes it much easier for us to diagnose where FawltyDeps itself might be improved.

First, let’s start by repeating our available strategies:

  • Identity mapping: The simplest strategy, but also the worst. We would like to avoid using it as much as possible.
  • Looking at locally installed packages: Our best option in terms of correctness, but not always complete: sometimes we have to concede that not all dependencies are available in a local Python environment, so we still need a fallback strategy.
  • Installing packages (from PyPI) into a temporary virtualenv: The ultimate fallback solution, but quite heavy-weight, and not always suitable (e.g. in a restricted CI environment). Hence, we put this behavior behind the --install-deps option.
  • Custom/user-defined mapping: Allow the user to have the final say in how dependencies are mapped into imports. This strategy should override the other strategies, but we expect few users will want to go through the fuss of defining their own mapping, so we cannot rely on this being used commonly.

Now, we need to figure out how to combine these strategies in the best way.

We have chosen to organize them in the sequence shown in Figure 3 below. Each strategy - when given the name of a dependency - can either return a successful mapping of that dependency name (into a corresponding set of import names), or return nothing (when a dependency is not found by that strategy). Dependencies that are not resolved by a strategy are passed onto the next strategy in the sequence. Since a dependency is mapped by only one strategy, that is, the first that returns something, we need to organize our strategies in order of decreasing preference. In other words:

  • The user-defined mapping, when provided, should always override other mappings. It thus comes first in the sequence.
  • Next, we want to look at the locally installed packages.
  • Finally, if we have not been able to find the dependency in either of the above, we want to use a fallback strategy:
    • If the user has enabled --install-deps, we attempt to install packages (subject to pip configuration, but from PyPI by default). If any of these packages fail to install, we abort the entire process and raise an error, as we do not expect the user wants a further fallback to the inaccurate identity mapping.
    • Otherwise, our fallback is the identity mapping, that is, we assume any unresolved dependency points to a package (as yet unseen) that provides a single import of the same name. Although this strategy is always “successful” (in terms of mapping to an import name), it is crucially not always correct!

To illustrate:

fawltydeps resolvers sequence
Figure 3. The sequence of resolvers used by FawltyDeps
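The sequence in Figure 3 can be sketched as a simple chain of lookups (hypothetical code, not FawltyDeps' internals):

```python
def resolve_all(deps, custom_mapping, installed, install_deps=False):
    """Resolve each dependency name to a set of import names, using
    the first strategy that knows about it."""
    resolved = {}
    for dep in deps:
        if dep in custom_mapping:        # 1. user-defined mapping wins
            resolved[dep] = custom_mapping[dep]
        elif dep in installed:           # 2. locally installed packages
            resolved[dep] = installed[dep]
        elif install_deps:               # 3a. would pip-install and inspect
            raise NotImplementedError("install into temporary virtualenv")
        else:                            # 3b. identity mapping fallback
            resolved[dep] = {dep}
    return resolved
```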

To bring this back into the overall context of FawltyDeps: once we have resolved the dependencies through the above mapping strategies, we now have an overall mapping of dependency names to provided import names, and this is the basis for the final report:

  • Any import found in the project that is not covered by any dependency is reported as an undeclared dependency.
  • Any dependency found to only provide imports that are never imported from anywhere is reported as a possibly unused dependency.
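These two checks follow directly from the resolved mapping, as this small sketch shows (names are illustrative, not FawltyDeps' internals):

```python
def report(imports_in_code, resolved_deps):
    # Union of all import names provided by the declared dependencies.
    provided = set().union(*resolved_deps.values()) if resolved_deps else set()
    # Imports not provided by any dependency are undeclared.
    undeclared = imports_in_code - provided
    # Dependencies whose imports are never used are possibly unused.
    unused = {
        dep for dep, imports in resolved_deps.items()
        if imports.isdisjoint(imports_in_code)
    }
    return undeclared, unused
```

Note how a wrong mapping (e.g. pyyaml resolved to pyyaml by the identity mapping) produces both a spurious undeclared import and a spurious unused dependency.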

The table below provides a summary of the available mappings, sorted in the order FawltyDeps processes them, along with options to customize them.

Priority | Mapping strategy | Options
1 | User-defined mapping | Provide a custom mapping in TOML format via --custom-mapping-file, or a [tool.fawltydeps.custom_mapping] section in pyproject.toml. Default: no custom mapping.
2 | Mapping from installed packages | Point to one or more environments via --pyenv. Default: auto-discovery of Python environments under the project’s basepath; if none are found, fall back to the Python environment in which FawltyDeps itself is installed.
3a | Mapping via temporary installation of packages | Activated with the --install-deps option.
3b | Identity mapping | Active by default; deactivated when --install-deps is used.

Examples

This section dives into some practical scenarios. Suppose you have a simple requirements.txt file:

numpy>=1.25.0
scikit-learn
pyyaml

We assume that these packages are already imported in some_script.py as

import numpy
import sklearn
import yaml

As we can see, our project has declared all its dependencies as it should, so FawltyDeps should ideally not report any problems. But let’s also assume that we’re running FawltyDeps in an incomplete environment - one where pyyaml is not installed - to see how this affects FawltyDeps.

Example 1: running with default options

When running with default options, like so:

fawltydeps

FawltyDeps will run through the default sequence of mappings, as shown in Figure 4:

Figure 4. A scenario where FawltyDeps resolves a requirements.txt file with default options

In particular:

  • No custom mapping is provided.
  • FawltyDeps automatically finds local environments or defaults to its own environment. In this example it finds scikit-learn and numpy in the local environment, and we can see that scikit-learn is correctly resolved to the sklearn import name.
  • Identity mapping is used to resolve any dependencies not resolved via previous mappers. In this example, pyyaml was not found above, and was therefore incorrectly resolved by the identity mapping to pyyaml.

The resulting output from FawltyDeps is:

These imports appear to be undeclared dependencies:
- 'yaml'

These dependencies appear to be unused (i.e. not imported):
- 'pyyaml'

For a more verbose report re-run with the `--detailed` option.

This first example shows a common pitfall of the identity mapping. Next, let’s see how --install-deps improves on this situation:

Example 2: running with custom options

Let’s now take advantage of some advanced FawltyDeps options by running the following command:

fawltydeps --custom-mapping-file my_mapping.toml --pyenv venv --install-deps

Figure 5 shows the path FawltyDeps takes through the sequence of mappings:

Figure 5. A scenario in which a requirements.txt file is resolved with a customized mapping configuration.

In particular:

  • We provide a partial custom mapping (e.g. via --custom-mapping-file). In this example, my_mapping.toml contains:
    scikit-learn = ["sklearn"]
  • We point to a local virtual environment (with --pyenv) where some dependencies are installed. (In this example, only numpy is installed in venv.)
  • We pass --install-deps, to ask FawltyDeps to temporarily install and resolve any remaining dependencies.

FawltyDeps returns the following result:

No undeclared or unused dependencies detected.

As expected, FawltyDeps now returns a better result: the --install-deps option downloads the pyyaml package from PyPI and makes it available to the resolver, which can now map the yaml import to the correct pyyaml dependency declaration.

Customizing FawltyDeps’ mappers

These examples demonstrate two extremes and we expect most usage to fall somewhere in between.

With the --json flag, the resulting package-to-imports mapping is exposed in the output under the .resolved_deps key. Using a command like this:

fawltydeps --custom-mapping-file my_mapping.toml --pyenv venv --install-deps --json | jq .resolved_deps

you can see which mappings are used to resolve a package into a set of imports, and further iterate on the mapping options to help FawltyDeps perform its best on your codebase.

Conclusion

FawltyDeps has come a long way from the version we presented in our first announcement. While it was initially limited to resolving packages from its own environment and falling back to the identity mapping, it now supports arbitrary local environments and custom user mappings, and it can temporarily install and resolve packages on its own. On top of that, it can also automatically discover virtual environments inside the analyzed project.2

We strive to provide a default behavior that makes sense for most projects, and to offer a customizable yet simple interface for advanced users that wish to take control over the mapping process. We believe the result is a powerful tool that delivers a complete, correct and transparent matching of your project’s dependencies and imports.

As always, we would be happy to hear your feedback! Try out the latest version of FawltyDeps and reach out to us with any problems or questions on our Github repository.


  1. The recent publication of Computational reproducibility of Jupyter notebooks from biomedical publications highlights that missing dependencies is a frequent occurrence in repositories hosting scientific computational experiments and has a detrimental effect on reproducibility.

  2. For completeness, here is an overview of the changes we’ve made to our mapping strategy over the last releases, and that together realize the picture presented in this blog post:

    • v0.7 introduces the --pyenv option to allow FawltyDeps to look up packages in a different Python environment than the one in which FawltyDeps is running.
    • v0.9 adds the user-defined mapping.
    • v0.10 adds support for __pypackages__ directories.
    • v0.11 introduces support for multiple --pyenv options.
    • v0.12 revamps our project traversal, allowing Python environments to be automatically found inside the project.
    • v0.13 introduces the --install-deps option, allowing missing project dependencies to be mapped correctly instead of falling back to the identity mapping.

  3. We depend on functionality from the excellent importlib_metadata library to extract the imports exposed by locally installed packages.

  4. This assumption was made regardless of whether FawltyDeps was installed in a virtualenv or as part of the system-wide Python installation, and we only documented that FawltyDeps had to be installed into the same environment as your project dependencies. One example of where this did not work out well is when you installed FawltyDeps with pipx install fawltydeps: this makes fawltydeps available everywhere (via your $PATH), but pipx installs it into its own separate virtualenv that is isolated from your project, meaning that FawltyDeps would almost always fall back to the identity mapping and yield poor results.

  5. Some less common locations of Python packages:

    • __pypackages__ directories (even though PEP 582 was recently rejected, these still occur in the wild).
    • Conda and other environment managers (not yet explicitly supported, although it’s on our radar).
    • Nix closures containing Python packages, like those produced by poetry2nix.

  6. One open issue is that FawltyDeps currently does not look at package versions. This usually does not cause problems in practice, but there are corner cases where it might. Consider, for example, a package_foo that used to provide two import names, module_a and module_b, but starting from version 2 only provides module_a. Now, if your project declares a dependency on package_foo>=2, but you still happen to import module_b in your code, this should be reported by FawltyDeps as an undeclared dependency (because you’re declaring a dependency on a version of package_foo where module_b no longer exists). However, if package_foo version 1 (not version 2) happens to be installed in your project’s environment, FawltyDeps will simply believe that package_foo (whichever version) provides both module_a and module_b, and the error won’t be flagged.

  7. To customize automatic installation (for example, to use a different package index), you can use pip’s environment variables.

  8. Note that the PyPI API does not currently expose the imports of the hosted packages (see here and here for relevant discussions). Downloading and unpacking these packages is therefore necessary.

  9. Some tools rely on custom mappings. A notable example is the Pants build system, which relies on static mappings provided by the user. Another example is the pipreqs library, which keeps a static database mapping packages to the import names they expose.

September 21, 2023 12:00 AM

September 20, 2023

Joey Hess

Haskell webassembly in the browser


live demo

As far as I know this is the first Haskell program compiled to Webassembly (WASM) with mainline ghc and using the browser DOM.

ghc's WASM backend is solid, but it only provides very low-level FFI bindings when used in the browser. Ints and pointers to WASM memory. (See here for details and for instructions on getting the ghc WASM toolchain I used.)

I imagine that in the future, WASM code will interface with the DOM by using a WASI "world" that defines a complete API (and browsers won't include Javascript engines anymore). But currently, WASM can't do anything in a browser without calling back to Javascript.

For this project, I needed 63 lines of (reusable) javascript (here). Plus another 18 to bootstrap running the WASM program (here). (Also browser_wasi_shim)

But let's start with the Haskell code. A simple program to pop up an alert in the browser looks like this:

{-# LANGUAGE OverloadedStrings #-}

import Wasmjsbridge

foreign export ccall hello :: IO ()

hello :: IO ()
hello = do
    alert <- get_js_object_method "window" "alert"
    call_js_function_ByteString_Void alert "hello, world!"

A larger program that draws on the canvas and generated the image above is here.

The Haskell side of the FFI interface is a bunch of fairly mechanical functions like this:

foreign import ccall unsafe "call_js_function_string_void"
    _call_js_function_string_void :: Int -> CString -> Int -> IO ()

call_js_function_ByteString_Void :: JSFunction -> B.ByteString -> IO ()
call_js_function_ByteString_Void (JSFunction n) b =
      BU.unsafeUseAsCStringLen b $ \(buf, len) ->
                _call_js_function_string_void n buf len

Many more would need to be added, or generated, to continue down this path to complete coverage of all data types. All in all it's 64 lines of code so far (here).

Also a C shim is needed, that imports from WASI modules and provides C functions that are used by the Haskell FFI. It looks like this:

void _call_js_function_string_void(uint32_t fn, uint8_t *buf, uint32_t len) __attribute__((
        __import_module__("wasmjsbridge"),
        __import_name__("call_js_function_string_void")
));

void call_js_function_string_void(uint32_t fn, uint8_t *buf, uint32_t len) {
        _call_js_function_string_void(fn, buf, len);
}

Another 64 lines of code for that (here). I found this pattern in Joachim Breitner's haskell-on-fastly and copied it rather blindly.

Finally, the Javascript that gets run for that is:

call_js_function_string_void(n, b, sz) {
    const fn = globalThis.wasmjsbridge_functionmap.get(n);
    const buffer = globalThis.wasmjsbridge_exports.memory.buffer;
    fn(decoder.decode(new Uint8Array(buffer, b, sz)));
},

Notice that this gets an identifier representing the javascript function to run, which might be any method of any object. It looks it up in a map and runs it. And the ByteString that got passed from Haskell has to be decoded to a javascript string.

In the Haskell program above, the function is window.alert. Why not pass a ByteString with that through the FFI? Well, you could. But then it would have to eval it. That would make running WASM in the browser be evaling Javascript every time it calls a function. That does not seem like a good idea if the goal is speed. GHC's javascript backend does use Javascript FFI snippets like that, but there they get pasted into the generated Javascript hairball, so no eval is needed.

So my code has things like get_js_object_method that look up things like Javascript functions and generate identifiers. It also has this:

call_js_function_ByteString_Object :: JSFunction -> B.ByteString -> IO JSObject

Which can be used to call things like document.getElementById that return a javascript object:

getElementById <- get_js_object_method (JSObjectName "document") "getElementById"
canvas <- call_js_function_ByteString_Object getElementById "myCanvas"

Here's the Javascript called by get_js_object_method. It generates a Javascript function that will be used to call the desired method of the object, and allocates an identifier for it, and returns that to the caller.

get_js_objectname_method(ob, osz, nb, nsz) {
    const buffer = globalThis.wasmjsbridge_exports.memory.buffer;
    const objname = decoder.decode(new Uint8Array(buffer, ob, osz));
    const funcname = decoder.decode(new Uint8Array(buffer, nb, nsz));
    const func = function (...args) { return globalThis[objname][funcname](...args) };
    const n = globalThis.wasmjsbridge_counter + 1;
    globalThis.wasmjsbridge_counter = n;
    globalThis.wasmjsbridge_functionmap.set(n, func);
    return n;
},

This does mean that every time a Javascript function id is looked up, some more memory is used on the Javascript side. For more serious uses of this, something would need to be done about that. Lots of other stuff like object value getting and setting is also not implemented, there's no support yet for callbacks, and so on. Still, I'm happy where this has gotten to after 12 hours of work on it.

I might release the reusable parts of this as a Haskell library, although it seems likely that ongoing development of ghc will make it obsolete. In the meantime, clone the git repo to have a play with it.


This blog post was sponsored by unqueued on Patreon.

September 20, 2023 01:48 PM

September 19, 2023

Gabriella Gonzalez

GHC plugin for HLint

GHC plugin for HLint

At work I was recently experimenting with running hlint (the widely used Haskell linting program) as a GHC plugin. One reason why I was interested in this is because we have a large (6000+ module) Haskell codebase at work, and I wanted to see if this would make it cheaper to run hlint on our codebase. Ultimately it did not work out but I built something that we could open source so I polished it up and released it in case other people find it useful. You can find the plugin (named hlint-plugin) on Hackage and on GitHub.

This post will explain the background and motivation behind this work to explain why such a plugin might be potentially useful to other Haskell users.

Introduction to hlint

If you’ve never heard of hlint before, it’s a Haskell source code linting tool that is pretty widely used in the Haskell ecosystem. For example, if you run hlint on the following Haskell file:

main :: IO ()
main = (mempty)

… then you’ll get the following hlint error message:

Main.hs:2:8-15: Warning: Redundant bracket
Found:
  (mempty)
Perhaps:
  mempty
  
1 hint

… telling the user to remove the parentheses1 from around the mempty.

Integrating hlint

However, hlint is a tool that is not integrated into the compiler, meaning that you have to run it out of band from compilation for it to catch errors. There are a few ways that one can fix this, though:

  • Create a script that builds your program and then runs hlint

    This is the simplest possible thing that one can do, but it works and some people do this. It’s the “low-tech” solution.

  • Use haskell-language-server or an IDE plugin that auto-runs hlint

    This is a bit nicer for developers because now they can get rapid feedback (in their editor) as they are authoring the code. For example, haskell-language-server supports an hlint plugin2 for this purpose.

  • A GHC plugin (what this post is about)

    If you turn hlint into a GHC plugin, then ALL GHC-based Haskell tools automatically incorporate hlint suggestions. For example, ghcid would automatically include hlint suggestions in its output, something that doesn’t work with other approaches to integrate hlint. Similarly, all cabal commands (including cabal build and cabal repl) and all stack commands benefit from a GHC plugin.

Alternatives

I’m not the first person who had this idea of turning hlint into a GHC plugin. The first attempt to do this was hlint-source-plugin, but that was a pretty low-tech solution; it basically ran hlint as an executable on the Haskell source file being processed even though the GHC plugin already has access to the parsed syntax tree.

The second attempt was the splint package. This GHC plugin was really well done (it’s basically exactly how I envisioned this was supposed to work) and the corresponding announcement post does a great job of motivating why hlint benefits from being run as a GHC plugin.

However, the problem is that the splint package was recently abandoned and the last version of GHC it supports is GHC 9.2. Since we use GHC 9.6 at work I decided to essentially revive the splint package so I created the hlint-plugin package which is essentially the successor to splint.

Improvements

hlint-plugin is not too different from what splint did, but the main improvements that hlint-plugin brings are:

  • Support for newer versions of GHC

    splint supports GHC versions 8.10, 9.0, and 9.2 whereas hlint-plugin supports GHC versions 9.0, 9.2, 9.4, and 9.6.

  • Known-good cabal/stack/nix builds for the plugin

    … see the next section for more details.

  • A test suite to verify that the plugin works

    hlint-plugin’s CI actually checks that the plugin works for all supported versions of GHC.

  • A simpler work-around to GHC issue #18261

    Basically, I independently stumbled upon the exact same problem that splint encountered, but worked around it in a simpler way. I won’t go into too much detail here other than to point out that you can compare how splint works around this bug with how hlint-plugin works around the bug.

Also, when stress testing hlint-plugin on our internal codebase I discovered an hlint bug which affected some of our modules, and fixed that, so the fix will be in the next release of hlint.

Tricky build stuff

Unfortunately, both splint and hlint-plugin are tricky to correctly install. Why? Because hlint (and ghc-lib-parser-ex) use the ghc-lib and ghc-lib-parser packages by default instead of the ghc API. This is actually a pain in the ass because a GHC plugin needs to be created using the ghc API (i.e. it needs to be a value of type ghc:GHC.Plugins.Plugin). Like, you can use hlint to create a ghc-lib:GHC.Plugins.Plugin and everything will type-check and build, but then when you try to actually run the plugin it will fail.

There is a way to get hlint and ghc-lib-parser-ex to use the ghc API, though! However, you have to build them with non-default cabal configure flags. Specifically, you have to configure hlint with the -f-ghc-lib option and configure ghc-lib-parser-ex with the -fno-ghc-lib option.

To ease things for users I provided a cabal.project file and a flake.nix file4 with working builds for hlint-plugin that set all the correct configuration options.

Performance

I mentioned in the introduction that I was hoping for some performance improvements from switching to a plugin but those improvements didn’t materialize. I’ll talk a bit about what I thought would work and why it didn’t pan out for us (even though it still might help for you).

So there are up to three ways that hlint could potentially be faster as a GHC plugin:

  • Not having to re-lint modules that haven’t changed

    This is nice (especially when your codebase has 6000+ modules like ours). When you turn hlint into a GHC plugin you only run it whenever GHC recompiles a module and you don’t have to run hlint over your entire codebase after every change.

    However, this was actually not a significant benefit to our company because we already have some scripts which take care of only running hlint on the modules that have changed (according to git). However, it’s still a “nice to have” because it’s architecturally simpler (no need to write that clever script if GHC can take care of detecting changes for us).

  • Not having to parse the Haskell code twice

    This is likely a minor performance improvement since parsing is (in my experience) typically not the bottleneck for compiling Haskell code.

  • Running hlint while GHC is compiling modules

    What I mean by this is that if hlint is a GHC plugin then it can begin running while the GHC build is ongoing! In large builds (like ours) there are often a large number of cores that go unused and the hlint plugin could potentially exploit those idle cores to do work before the build is done.

    However, in practice this benefit did not pan out, and our build didn’t really get faster when we enabled hlint-plugin. The time it took to build our codebase with the plugin was essentially the same amount of time as running hlint in a separate step.

Future directions

The hlint-source-plugin repository notes that if hlint were implemented as a GHC plugin (which it now is) then it would fix some of the hacks that hlint has to use:

Currently this plugin simply hooks into the parse stage and calls HLint with a file path. This means HLint will re-parse all source code. The next logical step is to use the actual parse tree, as given to us by GHC, and HLint that. This means that HLint can lose the special logic to run CPP, along with the hacky handling of fixity resolution (we get that done correctly by GHC’s renaming phase).

… because of this I sort of feel that hlint really should be a GHC plugin. It’s understandable why hlint was not initially implemented in this way (since I believe the GHC plugin system didn’t exist back then), but now it sort of feels like a GHC plugin is a much more natural way of integrating hlint.


  1. I refuse to call parentheses “brackets”.↩︎

  2. Note that this is a plugin for haskell-language-server, which is a different type of plugin than a GHC plugin. A haskell-language-server plugin only works with haskell-language-server whereas a GHC plugin works with anything that uses GHC. The two types of plugins are also installed and set up in different ways.↩︎

  3. Note that this is a plugin for haskell-language-server, which is a different type of plugin than a GHC plugin. A haskell-language-server plugin only works with haskell-language-server whereas a GHC plugin works with anything that uses GHC. The two types of plugins are also installed and set up in different ways.↩︎

  4. I tried to create a working stack.yaml and failed to get it working, but I’d accept a pull request adding a working stack build if someone else has better luck than I did.↩︎

by Gabriella Gonzalez (noreply@blogger.com) at September 19, 2023 09:16 PM

Well-Typed.Com

ZuriHac 2023 and GHC Contributors' Workshop: Summary and Materials

Many of us at Well-Typed enjoyed travelling to Zurich earlier this year for ZuriHac 2023 and the GHC Contributors’ Workshop. Thanks to the organisers of both these events, and to all those who attended for the great discussions.

The videos of our sessions as well as the materials used in them are available online, so those who could not attend can watch the talks by following the links below.

ZuriHac Workshop: Lazy Evaluation by Andres Löh

Watch video on YouTube | Slides | Code repository

As in previous years, Well-Typed were happy to support ZuriHac by offering a free training workshop:

In this workshop, we are going to take a deep dive into lazy evaluation, looking at several examples and reasoning about how they get evaluated. The goal is to develop a strong understanding of how Haskell’s evaluation strategy works. Hopefully, we will see why laziness is a compelling idea with a lot of strong points, while also learning how some common sources of space leaks can be avoided.

The workshop will be accessible to anyone who has mastered the basics of Haskell and is looking to understand the language in more depth, whether they are a student or professional developer. We are not going to use any advanced features of the language, and you do not have to be a Haskell expert to attend!

Check out the ZuriHac 2023 YouTube playlist for more great talks from ZuriHac.

GHC Contributors’ Workshop

The GHC Contributors’ Workshop was organised by the Haskell Foundation as a way to introduce people to GHC development. As sponsors of the Haskell Foundation and regular contributors to GHC, Well-Typed sent several speakers to the event:

The GHC Contributor’s Workshop playlist has all the talks from a variety of other GHC contributors. You can also read the retrospective report from David Christiansen.

More from Well-Typed

Why not check out the Haskell Unfolder, our YouTube series about all things Haskell? Or you can find talks presented by all our consultants by subscribing to our YouTube Channel.

If you are interested in our courses or other services, check our Training page, Consultancy page, or just send us an email.

by christine, adam, andres, ben, duncan, sam, zubin at September 19, 2023 12:00 AM

GHC Developer Blog

GHC 9.8.1-alpha4 is now available

GHC 9.8.1-alpha4 is now available

bgamari - 2023-09-19

The GHC developers are very pleased to announce the availability of the fourth alpha prerelease of GHC 9.8.1. Binary distributions, source distributions, and documentation are available at downloads.haskell.org.

GHC 9.8 will bring a number of new features and improvements, including:

  • Preliminary support for the TypeAbstractions language extension, allowing types to be bound in type declarations.

  • Support for the ExtendedLiterals extension, providing more consistent support for non-word-sized numeric literals in the surface language

  • Improved rewrite rule matching behavior, allowing limited matching of higher-order patterns

  • Better support for user-defined warnings by way of the WARNING pragma

  • The introduction of the new GHC.TypeError.Unsatisfiable constraint, allowing more predictable user-defined type errors

  • Implementation of the export deprecation proposal, allowing module exports to be marked with DEPRECATED pragmas

  • The addition of build semaphore support for parallel compilation, allowing better use of parallelism across GHC builds

  • More efficient representation of info table provenance information, reducing binary sizes by nearly 80% in some cases when -finfo-table-map is in use

A full accounting of changes can be found in the release notes. This alpha includes roughly 40 new commits relative to alpha 3, including what we believe should be nearly the last changes to GHC’s boot libraries.

We would like to thank GitHub, IOG, the Zw3rk stake pool, Well-Typed, Tweag I/O, Serokell, Equinix, SimSpace, the Haskell Foundation, and other anonymous contributors whose on-going financial and in-kind support has facilitated GHC maintenance and release management over the years. Finally, this release would not have been possible without the hundreds of open-source contributors whose work comprises this release.

As always, do give this release a try and open a ticket if you see anything amiss.

by ghc-devs at September 19, 2023 12:00 AM

September 16, 2023

Magnus Therning

Setting up emacs-openai/chatgpt

Yesterday I decided to try to make more use of the ChatGPT account I have. What prompted it mostly was that I recalled that my employer has a paid subscription and that if we use it enough they'll get us access to ChatGPT4.

After a bit of research1 I decided to start with emacs-openai/chatgpt. However, as I found the instructions slightly lacking I'm sharing my setup.

The instructions for straight.el fail to mention that one needs the openai package too.

(use-package openai
  :straight (openai :type git :host github :repo "emacs-openai/openai"))

The complete declaration for use-package ended up looking like this:

(use-package chatgpt
  :straight (chatgpt :type git :host github :repo "emacs-openai/chatgpt")
  :requires openai
  :config
  (setq openai-key #'openai-key-auth-source))

Oh, and don't forget to put an entry into `~/.authinfo.gpg`. Something like this should do it:

machine api.openai.com login <anything> password <your key>

Footnotes:

1. I found Alex Kehayias' note to be a good starting point.

September 16, 2023 04:22 AM

September 13, 2023

Mark Jason Dominus

Horizontal and vertical complexity

Note: The jumping-off place for this article is a conference talk which I did not attend. You should understand this article as rambling musings on related topics, not as a description of the talk or a response to it or a criticism of it or as a rebuttal of its ideas.


A co-worker came back from PyCon reporting on a talk called “Wrapping up the Cruft - Making Wrappers to Hide Complexity”. He said:

The talk was focussed on hiding complexity for educational purposes. … The speaker works for an educational organisation … and provided an example of some code for blinking lights on a single-board machine. It was 100 lines long, you had to know about a bunch of complexity that required you to have some understanding of the hardware; then an example where the initialisation was wrapped up in an import, and for the kids it was as simple as selecting a colour and which LED to light up. And how much more readable the code was as a result.

The better we can hide how the sausage is made the more approachable and easier it is for those who build on it to be productive. I think it's good to be reminded of this lesson.

I was fully on board with this until the last bit, which gave me an uneasy feeling. Wrapping up code this way reduces horizontal complexity in that it makes the top level program shorter and quicker. But it increases vertical complexity because there are now more layers of function calling, more layers of interface to understand, and more hidden magic behavior. When something breaks, your worries aren't limited to understanding what is wrong with your code. You also have to wonder about what the library call is doing. Is the library correct? Are you calling it correctly? The difficulty of localizing the bug is larger, and when there is a problem it may be in some module that you can't see, and that you may not know exists.

Good interfaces successfully hide most of this complexity, but even in the best instances the complexity has only been hidden, and it is all still there in the program. An uncharitable description would be that the complexity has been swept under the carpet. And this is the best case! Bad interfaces don't even succeed in hiding the complexity, which keeps leaking upward, like a spreading stain on that carpet, one that warns of something awful underneath.

Advice about how to write programs bangs the same drum over and over and over:

  • Reduce complexity
  • Do the simplest thing that could possibly work
  • You ain't gonna need it
  • Explicit is better than implicit

But here we have someone suggesting the opposite. We should be extremely wary.

There is always a tradeoff. Leaky abstractions can increase the vertical complexity by more than they decrease the horizontal complexity. Better-designed abstractions can achieve real wins.

It’s a hard, hard problem. That’s why they pay us the big bucks.

Ratchet effects

This is a passing thought that I didn't consider carefully enough to work into the main article.

A couple of years ago I wrote an article called Creeping featurism and the ratchet effect about how adding features to software, or adding more explanations to the manual, is subject to a “ratcheting force”. The benefit of the change is localized and easy to imagine:

You can imagine a confused person in your head, someone who happens to be confused in exactly the right way, and who is miraculously helped out by the presence of the right two sentences in the exact right place.

But the cost of the change is that the manual is now a tiny bit larger. It doesn't affect any specific person. But it imposes a tiny tax on everyone who uses the manual.

Similarly adding a feature to software has an obvious benefit, so there's pressure to add more features, and the costs are hidden, so there's less pressure in the opposite direction.

And similarly, adding code and interfaces and libraries to software has an obvious benefit: look how much smaller the top-level code has become! But the cost, that the software is 0.0002% more complex, is harder to see. And that cost increases imperceptibly, but compounds exponentially. So you keep moving in the same direction, constantly improving the software architecture, until one day you wake up and realize that it is unmaintainable. You are baffled. What could have gone wrong?

Kent Beck says, “design isn't free”.

Anecdote

The original article is in the context of a class for beginners where the kids just want to make the LEDs light up. If I understand the example correctly, in this context I would probably have made the same choice for the same reason.

But I kept thinking of an example where I made the opposite choice. I taught an introduction to programming in C class about thirty years ago. The previous curriculum had considered pointers an advanced topic and tried to defer them to the middle of the semester. But the author of the curriculum had had a big problem: you need pointers to deal with scanf. What to do?

The solution chosen by the previous curriculum was to supply the students with a library of canned input functions like

    int get_int(void);   /* Read an integer from standard input */

These used scanf under the hood. (Under the carpet, one might say.) But all the code with pointers was hidden.

I felt this was a bad move. Even had the library been a perfect abstraction (it wasn't) and completely bug-free (it wasn't) it would still have had a giant flaw: Every minute of time the students spent learning to use this library was a minute wasted on something that would never be of use and that had no intrinsic value. Every minute of time spent on this library was time that could have been spent learning to use pointers! People programming in C will inevitably have to understand pointers, and will never have to understand this library.

My co-worker from the first part of this article wrote:

The better we can hide how the sausage is made the more approachable and easier it is for those who build on it to be productive.

In some educational contexts, I think this is a good idea. But not if you are trying to teach people sausage-making!

by Mark Dominus (mjd@plover.com) at September 13, 2023 01:48 PM

Well-Typed.Com

The Haskell Unfolder Episode 11: Haskell at ICFP

Today, 2023-09-13, at 1830 UTC (11:30 am PDT, 2:30 pm EDT, 7:30 pm BST, 20:30 CEST, …) we are streaming the eleventh episode of the Haskell Unfolder live on YouTube.

The Haskell Unfolder Episode 11: Haskell at ICFP

In this episode, Andres and Edsko will talk about Edsko’s visit to ICFP (the International Conference on Functional Programming), the Haskell Symposium, and HIW (the Haskell Implementors’ Workshop) from 4-9 September 2023 in Seattle. We will highlight a few select papers from these events.

About the Haskell Unfolder

The Haskell Unfolder is a YouTube series about all things Haskell hosted by Edsko de Vries and Andres Löh, with episodes appearing approximately every two weeks. All episodes are live-streamed, and we try to respond to audience questions. All episodes are also available as recordings afterwards.

We have a GitHub repository with code samples from the episodes.

And we have a public Google calendar (also available as ICal) listing the planned schedule.

by andres, edsko at September 13, 2023 12:00 AM

September 12, 2023

Chris Reade

Graphs, Kites and Darts – and Theorems

We continue our exploration of properties of Penrose’s aperiodic tilings with kites and darts using Haskell and Haskell Diagrams.

In this blog we discuss some interesting properties we have discovered concerning the \small\texttt{decompose}, \small\texttt{compose}, and \small\texttt{force} operations along with some proofs.

Index

  1. Quick Recap (including operations \small\texttt{compose}, \small\texttt{force}, \small\texttt{decompose} on Tgraphs)
  2. Composition Problems and a Compose Force Theorem (composition is not a simple inverse to decomposition)
  3. Perfect Composition Theorem (establishing relationships between \small\texttt{compose}, \small\texttt{force}, \small\texttt{decompose})
  4. Multiple Compositions (extending the Compose Force theorem for multiple compositions)
  5. Proof of the Compose Force Theorem (showing \small\texttt{compose} is total on forced Tgraphs)

1. Quick Recap

Haskell diagrams allowed us to render finite patches of tiles easily as discussed in Diagrams for Penrose tiles. Following a suggestion of Stephen Huggett, we found that the description and manipulation of such tilings is greatly enhanced by using planar graphs. In Graphs, Kites and Darts we introduced a specialised planar graph representation for finite tilings of kites and darts which we called Tgraphs (tile graphs). These enabled us to implement operations that use neighbouring tile information and in particular operations \small\texttt{decompose}, \small\texttt{compose}, and \small\texttt{force}.

For ease of reference, we reproduce the half-tiles we are working with here.

Figure 1: Half-tile faces
Figure 1: Half-tile faces

Figure 1 shows the right-dart (RD), left-dart (LD), left-kite (LK) and right-kite (RK) half-tiles. Each has a join edge (shown dotted) and a short edge and a long edge. The origin vertex is shown red in each case. The vertex at the opposite end of the join edge from the origin we call the opp vertex, and the remaining vertex we call the wing vertex.

If the short edges have unit length then the long edges have length \phi (the golden ratio) and all angles are multiples of 36^{\circ} (a tenth turn), with kite halves having two 2s and a 1, and dart halves having a 3 and two 1s. This geometry of the tiles is abstracted away at the graph representation level but is used when checking the validity of tile additions and by the drawing functions.
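As a quick sanity check of this angle arithmetic (a standalone snippet, not part of the tilings library): each half-tile is a triangle, so its angles, measured in tenth-turn units of 36°, must sum to 180°/36° = 5.

```haskell
-- Sanity check of the tenth-turn arithmetic described above.
-- Each half-tile is a triangle, so its angles (in units of 36 degrees)
-- must sum to 180/36 = 5.

kiteHalfAngles, dartHalfAngles :: [Int]
kiteHalfAngles = [2, 2, 1]  -- half-kite: two 2s and a 1
dartHalfAngles = [3, 1, 1]  -- half-dart: a 3 and two 1s

main :: IO ()
main = mapM_ (print . sum) [kiteHalfAngles, dartHalfAngles]  -- prints 5 twice
```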

There are rules for how the tiles can be put together to make a legal tiling (see e.g. Diagrams for Penrose tiles). We defined a Tgraph (in Graphs, Kites and Darts) as a list of such half-tiles which are constrained to form a legal tiling but must also be connected with no crossing boundaries (see below).

As a simple example consider kingGraph (2 kites and 3 darts round a king vertex). We represent each half-tile as a TileFace with three vertex numbers, then apply makeTgraph to the list of ten TileFaces. The function makeTgraph :: [TileFace] -> Tgraph performs the necessary checks to ensure the result is a valid Tgraph.

kingGraph :: Tgraph
kingGraph = makeTgraph 
  [LD (1,2,3),RD (1,11,2),LD (1,4,5),RD (1,3,4),LD (1,10,11)
  ,RD (1,9,10),LK (9,1,7),RK (9,7,8),RK (5,7,1),LK (5,6,7)
  ]

To view the Tgraph we simply form a diagram (in this case 2 diagrams horizontally separated by 1 unit)

  hsep 1 [drawjLabelled kingGraph, draw kingGraph]

and the result is shown in figure 2 with labels and dashed join edges (left) and without labels and join edges (right).

Figure 2: kingGraph with labels and dashed join edges (left) and without (right).
Figure 2: kingGraph with labels and dashed join edges (left) and without (right).

The boundary of the Tgraph consists of the edges of half-tiles which are not shared with another half-tile, so they go round untiled/external regions. The no crossing boundary constraint (equivalently, locally tile-connected) means that a boundary vertex has exactly two incident boundary edges and therefore has a single external angle in the tiling. This ensures we can always locally determine the relative angles of tiles at a vertex. We say a collection of half-tiles is a valid Tgraph if it constitutes a legal tiling but also satisfies the connectedness and no crossing boundaries constraints.

Our key operations on Tgraphs are \small\texttt{decompose}, \small\texttt{force}, and \small\texttt{compose} which are illustrated in figure 3.

Figure 3: decompose, force, and compose
Figure 3: decompose, force, and compose

Figure 3 shows the kingGraph with its decomposition above it (left), the result of forcing the kingGraph (right) and the composition of the forced kingGraph (bottom right).

Decompose

An important property of Penrose dart and kite tilings is that it is possible to divide the half-tile faces of a tiling into smaller half-tile faces, to form a new (smaller scale) tiling.

Figure 4: Decomposition of (left) half-tiles
Figure 4: Decomposition of (left) half-tiles

Figure 4 illustrates the decomposition of a left-dart (top row) and a left-kite (bottom row). With our Tgraph representation we simply introduce new vertices for dart and kite long edges and kite join edges and then form the new faces using these. This does not involve any geometry, because that is taken care of by drawing operations.

Force

Figure 5 illustrates the rules used by our \small\texttt{force} operation (we omit a mirror-reflected version of each rule).

Figure 5: Force rules
Figure 5: Force rules

In each case the yellow half-tile is added in the presence of the other half-tiles shown. The yellow half-tile is forced because, by the legal tiling rules, there is no choice for adding a different half-tile on the edge where the yellow tile is added.

We call a Tgraph correct if it represents a tiling which can be continued infinitely to cover the whole plane without getting stuck, and incorrect otherwise. Forcing involves adding half-tiles by the illustrated rules round the boundary until either no more rules apply (in which case the result is a forced Tgraph) or a stuck tiling is encountered (in which case an incorrect Tgraph error is raised). Hence \small\texttt{force} is a partial function but total on correct Tgraphs.

Compose: This is discussed in the next section.

2. Composition Problems and a Theorem

Compose Choices

For an infinite tiling, composition is a simple inverse to decomposition. However, for a finite tiling with boundary, composition is not so straightforward. Firstly, we may need to leave half-tiles out of a composition because the necessary parts of a composed half-tile are missing. For example, a half-dart with a boundary short edge or a whole kite with both short edges on the boundary must necessarily be excluded from a composition. Secondly, on the boundary, there can sometimes be a problem of choosing whether a half-dart should compose to become a half-dart or a half-kite. This choice in composing only arises when there is a half-dart with its wing on the boundary but insufficient local information to determine whether it should be part of a larger half-dart or a larger half-kite.

In the literature (see for example 1 and 2) there is an often repeated method for composing (also called inflating). This method always makes the kite choice when there is a choice. Whilst this is a sound method for an unbounded tiling (where there will be no choice), we show that this is an unsound method for finite tilings as follows.

Clearly composing should preserve correctness. However, figure 6 (left) shows a correct Tgraph which is a forced queen, but the kite-favouring composition of the forced queen produces the incorrect Tgraph shown in figure 6 (centre). Applying our \small\texttt{force} function to this reveals a stuck tiling and reports an incorrect Tgraph.

Figure 6: An erroneous and a safe composition
Figure 6: An erroneous and a safe composition

Our algorithm (discussed in Graphs, Kites and Darts) detects dart wings on the boundary where there is a choice and classifies them as unknowns. Our composition refrains from making a choice by not composing a half-dart with an unknown wing vertex. The rightmost Tgraph in figure 6 shows the result of our composition of the forced queen with the half-tile faces left out of the composition (the remainder faces) shown in green. This avoidance of making a choice (when there is a choice) guarantees our composition preserves correctness.

Compose is a Partial Function

A different composition problem can arise when we consider Tgraphs that are not decompositions of Tgraphs. In general, \small\texttt{compose} is a partial function on Tgraphs.

Figure 7: Composition may fail to produce a Tgraph
Figure 7: Composition may fail to produce a Tgraph

Figure 7 shows a Tgraph (left) with its successful composition (centre) and the half-tile faces that would result from a second composition (right) which do not form a valid Tgraph because of a crossing boundary (at vertex 6). Thus composition of a Tgraph may fail to produce a Tgraph when the resulting faces are disconnected or have a crossing boundary.

However, we claim that \small\texttt{compose} is a total function on forced Tgraphs.

Compose Force Theorem

Theorem: Composition of a forced Tgraph produces a valid Tgraph.

We postpone the proof (outline) for this theorem to section 5. Meanwhile we use the result to establish relationships between \small\texttt{compose}, \small\texttt{force}, and \small\texttt{decompose} in the next section.

3. Perfect Composition Theorem

In Graphs, Kites and Darts we produced a diagram showing relationships between multiple decompositions of a dart and the forced versions of these Tgraphs. We reproduce this here along with a similar diagram for multiple decompositions of a kite.

Figure 8: Commuting Diagrams
Figure 8: Commuting Diagrams

In figure 8 we show separate (apparently) commuting diagrams for the dart and for the kite. The bottom rows show the decompositions, the middle rows show the result of forcing the decompositions, and the top rows illustrate how the compositions of the forced Tgraphs work by showing both the composed faces (black edges) and the remainder faces (green edges) which are removed in the composition. The diagrams are examples of some commutativity relationships concerning \small\texttt{force}, \small\texttt{compose} and \small\texttt{decompose} which we will prove.

It should be noted that these diagrams break down if we consider only half-tiles as the starting points (bottom right of each diagram). The decomposition of a half-tile does not recompose to its original, but produces an empty composition. So we do not even have g = (\small\texttt{compose} \cdot \small\texttt{decompose}) \  g in these cases. Forcing the decomposition also results in an empty composition. Clearly there is something special about the depicted cases and it is not merely that they are wholetile complete because the decompositions are not wholetile complete. [Wholetile complete means there are no join edges on the boundary, so every half-tile has its other half.]

Below we have captured the properties that are sufficient for the diagrams to commute as in figure 8. In the proofs we use a partial ordering on Tgraphs (modulo vertex relabelling) which we define next.

Partial ordering of Tgraphs

If g_0 and g_1 are both valid Tgraphs and g_0 consists of a subset of the (half-tile) faces of g_1 we have

\displaystyle g_0 \subseteq g_1

which gives us a partial order on Tgraphs. Often, though, g_0 is only isomorphic to a subset of the faces of g_1, requiring a vertex relabelling to become a subset. In that case we write

\displaystyle g_0 \sqsubseteq g_1

which is also a partial ordering and induces an equivalence of Tgraphs defined by

\displaystyle g_0 \equiv g_1  \text{ if and only if } g_0 \sqsubseteq g_1 \text{ and } g_1 \sqsubseteq g_0

in which case g_0 and g_1 are isomorphic as Tgraphs.

Both \small\texttt{compose} and \small\texttt{decompose} are monotonic with respect to \sqsubseteq meaning:

\displaystyle  g_0 \sqsubseteq g_1 \text{ implies } \small\texttt{compose} \ g_0 \sqsubseteq \small\texttt{compose} \ g_1 \text{ and } \small\texttt{decompose} \ g_0 \sqsubseteq \small\texttt{decompose} \ g_1

We also have \small\texttt{force} is monotonic, but only when restricted to correct Tgraphs. Also, when restricted to correct Tgraphs, we have \small\texttt{force} is non decreasing because it only adds faces:

\displaystyle  g  \sqsubseteq \small\texttt{force} \ g

and \small\texttt{force} is idempotent (forcing a forced correct Tgraph leaves it the same):

\displaystyle  (\small\texttt{force} \cdot \small\texttt{force}) \ g  \equiv \small\texttt{force} \ g
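These ordering laws can be illustrated with a schematic model (purely illustrative — not the real Tgraph library): represent a Tgraph as a set of abstract faces, take \sqsubseteq to be set inclusion, and use an invented closure operation forceToy standing in for \small\texttt{force}, which repeatedly adds "forced" faces until no rule applies.

```haskell
import qualified Data.Set as Set

-- Schematic model only: faces are abstract Ints and the partial order
-- is set inclusion (relabelling is ignored in this toy).
type Tg = Set.Set Int

sub :: Tg -> Tg -> Bool
sub = Set.isSubsetOf

-- Toy stand-in for force: repeatedly add the successor of every even
-- face (a made-up "rule") until a fixed point is reached.
forceToy :: Tg -> Tg
forceToy g
  | g' == g   = g
  | otherwise = forceToy g'
  where
    g' = g `Set.union` Set.map succ (Set.filter even g)

main :: IO ()
main = do
  let g0 = Set.fromList [2]
      g1 = Set.fromList [2, 4]
  print (g1 `sub` forceToy g1)                     -- non-decreasing: True
  print (forceToy (forceToy g1) == forceToy g1)    -- idempotent: True
  print (forceToy g0 `sub` forceToy g1)            -- monotone here: True
```

Any fixed-point closure of this shape is non-decreasing and idempotent by construction, which is what makes it a reasonable toy for the laws above.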

Composing perfectly and perfect compositions

Definition: A Tgraph g composes perfectly if all faces of g are composable (i.e. there are no remainder faces of g when composing).

We note that the composed faces must be a valid Tgraph (connected with no crossing boundaries) if all faces are included in the composition because g has those properties. Clearly, if g composes perfectly then

\displaystyle (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g \equiv g

In general, for arbitrary g where the composition is defined, we only have

\displaystyle (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g \sqsubseteq g

Definition: A Tgraph g' is a perfect composition if \small\texttt{decompose} \  g' composes perfectly.

Clearly if g' is a perfect composition then

\displaystyle (\small\texttt{compose} \cdot \small\texttt{decompose}) \  g' \equiv g'

(We could use equality here because any new vertex labels introduced by \small\texttt{decompose} will be removed by \small\texttt{compose}). In general, for arbitrary g',

\displaystyle (\small\texttt{compose} \cdot \small\texttt{decompose}) \  g' \sqsubseteq g'
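These inclusions can be mimicked in a toy model (invented decomposeToy and composeToy functions, not the real operations): decomposition splits each abstract face into two half-scale faces, and composition recombines a face only when its partner half is present, dropping unpaired "remainder" faces.

```haskell
import qualified Data.Set as Set

-- Schematic model only: a Tgraph is a set of abstract Int faces.
type Tg = Set.Set Int

-- The other half of a decomposed pair (2n pairs with 2n+1).
partner :: Int -> Int
partner x = if even x then x + 1 else x - 1

-- Split each face n into two half-scale faces 2n and 2n+1.
decomposeToy :: Tg -> Tg
decomposeToy g = Set.fromList [y | x <- Set.toList g, y <- [2 * x, 2 * x + 1]]

-- Recombine only faces whose partner is present; unpaired faces are
-- remainder faces and are dropped.
composeToy :: Tg -> Tg
composeToy g =
  Set.fromList [x `div` 2 | x <- Set.toList g, Set.member (partner x) g]

main :: IO ()
main = do
  let g  = Set.fromList [1, 3, 7]  -- composes perfectly in this model
      g0 = Set.fromList [2, 5]     -- two unpaired halves (remainder faces)
  print (composeToy (decomposeToy g) == g)                  -- True
  print (decomposeToy (composeToy g0) `Set.isSubsetOf` g0)  -- True (strictly smaller)
```

Here g0 shows the strict case of (\small\texttt{decompose} \cdot \small\texttt{compose}) \ g \sqsubseteq g: both of its faces are remainder faces, so the round trip yields the empty set.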

Lemma 1: g' is a perfect composition if and only if g' has the following 2 properties:

  1. every half-kite with a boundary join has either a half-dart or a whole kite on the short edge, and
  2. every half-dart with a boundary join has a half-kite on the short edge,

(Proof outline:) Firstly note that unknowns in g (= \small\texttt{decompose} \  g') can only come from boundary joins in g'. The properties 1 and 2 guarantee that g has no unknowns. Since every face of g has come from a decomposed face in g', there can be no faces in g that will not recompose, so g will compose perfectly to g'. Conversely, if g' is a perfect composition, its decomposition g can have no unknowns. This implies boundary joins in g' must satisfy properties 1 and 2. \square

(Note: a perfect composition g' may have unknowns even though its decomposition g has none.)

It is easy to see two special cases:

  1. If g' is wholetile complete then g' is a perfect composition.

    Proof: Wholetile complete implies no boundary joins which implies properties 1 and 2 in lemma 1 which implies g' is a perfect composition. \square

  2. If g' is a decomposition then g' is a perfect composition.

    Proof: If g' is a decomposition, then every half-dart has a half-kite on the short edge which implies property 2 of lemma 1. Also, any half-kite with a boundary join in g' must have come from a decomposed half-dart since a decomposed half-kite produces a whole kite with no boundary kite join. So the half-kite must have a half-dart on the short edge which implies property 1 of lemma 1. The two properties imply g' is a perfect composition. \square

We note that these two special cases cover all the Tgraphs in the bottom rows of the diagrams in figure 8. So the Tgraphs in each bottom row are perfect compositions, and furthermore, they all compose perfectly except for the rightmost Tgraphs which have empty compositions.

In the following results we make the assumption that a Tgraph is correct, which guarantees that when \small\texttt{force} is applied, it terminates with a correct Tgraph. We also note that \small\texttt{decompose} preserves correctness as does \small\texttt{compose} (provided the composition is defined).

Lemma 2: If g_f is a forced, correct Tgraph then

\displaystyle (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose}) \  g_f \equiv g_f

(Proof outline:) The proof uses a case analysis of boundary and internal vertices of g_f. For internal vertices we just check there is no change at the vertex after (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose}) using figure 11 (plus an extra case for the forced star). For boundary vertices we check local contexts similar to those depicted in figure 10 (but including empty composition cases). This reveals there is no local change of the boundary at any boundary vertex, and since this is true for all boundary vertices, there can be no global change. (We omit the full details). \square

Lemma 3: If g' is a perfect composition and a correct Tgraph, then

\displaystyle \small\texttt{force} \  g' \sqsubseteq (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose}) \  g'

(Proof outline:) The proof is by analysis of each possible force rule applicable on a boundary edge of g' and checking local contexts to establish that (i) the result of applying (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose}) to the local context must include the added half-tile, and (ii) if the added half-tile has a new boundary join, then the result must include both halves of the new half-tile. The two properties of perfect compositions mentioned in lemma 1 are critical for the proof. However, since the result of adding a single half-tile may break the condition of the Tgraph being a perfect composition, we need to arrange that half-tiles are completed first and then each subsequent half-tile addition is paired with its wholetile completion. This ensures the perfect composition condition holds at each step for a proof by induction. [A separate proof is needed to show that the ordering of applying force rules makes no difference to a final correct Tgraph (apart from vertex relabelling)]. \square

Lemma 4: If g composes perfectly and is a correct Tgraph then

\displaystyle \small\texttt{force} \ g \equiv (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose})\ g

Proof: Assume g composes perfectly and is a correct Tgraph. Since \small\texttt{force} is non-decreasing (with respect to \sqsubseteq on correct Tgraphs)

\displaystyle \small\texttt{compose} \  g \sqsubseteq (\small\texttt{force} \cdot \small\texttt{compose}) \  g

and since \small\texttt{decompose} is monotonic

\displaystyle (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g \sqsubseteq (\small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g

Since g composes perfectly, the left hand side is just g, so

\displaystyle g \sqsubseteq (\small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g

and since \small\texttt{force} is monotonic (with respect to \sqsubseteq on correct Tgraphs)

\displaystyle (*) \ \ \ \  \ \small\texttt{force} \  g \sqsubseteq (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g

For the opposite direction, we substitute \small\texttt{compose} \  g for g' in lemma 3 to get

\displaystyle (\small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) \  g

Then, since (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g \equiv g, we have

\displaystyle (\small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq (\small\texttt{compose} \cdot \small\texttt{force}) \  g

Apply \small\texttt{decompose} to both sides (using monotonicity)

\displaystyle (\small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq (\small\texttt{decompose} \cdot \small\texttt{compose} \cdot \small\texttt{force}) \  g

For any g'' for which the composition is defined we have (\small\texttt{decompose} \cdot \small\texttt{compose})\ g'' \sqsubseteq g'' so we get

\displaystyle (\small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq \small\texttt{force} \  g

Now apply \small\texttt{force} to both sides and note (\small\texttt{force} \cdot \small\texttt{force})\ g \equiv \small\texttt{force} \ g to get

\displaystyle (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq \small\texttt{force} \  g

Combining this with (*) above proves the required equivalence. \square

Theorem (Perfect Composition): If g composes perfectly and is a correct Tgraph then

\displaystyle (\small\texttt{compose} \cdot \small\texttt{force}) \  g \equiv (\small\texttt{force} \cdot \small\texttt{compose}) \  g

Proof: Assume g composes perfectly and is a correct Tgraph. By lemma 4 we have

\displaystyle \small\texttt{force} \ g \equiv (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose})\ g

Applying \small\texttt{compose} to both sides, gives

\displaystyle (\small\texttt{compose} \cdot \small\texttt{force}) \ g \equiv (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force} \cdot \small\texttt{compose})\ g

Now by lemma 2, with g_f = (\small\texttt{force} \cdot \small\texttt{compose}) \  g, the right hand side is equivalent to

\displaystyle (\small\texttt{force} \cdot \small\texttt{compose}) \  g

which establishes the result. \square

Corollaries (of the perfect composition theorem):

  1. If g' is a perfect composition and a correct Tgraph then

    \displaystyle  \small\texttt{force} \  g' \equiv (\small\texttt{compose} \cdot \small\texttt{force} \cdot \small\texttt{decompose}) \  g'

    Proof: Let g' = \small\texttt{compose} \  g (so g \equiv \small\texttt{decompose} \  g') in the theorem. \square

    [This result generalises lemma 2 because any correct forced Tgraph g_f is necessarily wholetile complete and therefore a perfect composition, and \small\texttt{force} \ g_f \equiv g_f.]

  2. If g' is a perfect composition and a correct Tgraph then

    \displaystyle  (\small\texttt{decompose} \cdot \small\texttt{force}) \  g' \sqsubseteq (\small\texttt{force} \cdot \small\texttt{decompose}) \  g'

    Proof: Apply \small\texttt{decompose} to both sides of the previous corollary and note that

    \displaystyle  (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g'' \sqsubseteq g'' \textit{ for any } g''

    provided the composition is defined, which it must be for a forced Tgraph by the Compose Force theorem. \square

  3. If g' is a perfect composition and a correct Tgraph then

    \displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose}) \  g' \equiv (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force}) \  g'

    Proof: Apply \small\texttt{force} to both sides of the previous corollary noting \small\texttt{force} is monotonic and idempotent for correct Tgraphs

    \displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force}) \  g' \sqsubseteq (\small\texttt{force} \cdot \small\texttt{decompose}) \  g'

    From the fact that \small\texttt{force} is non-decreasing and \small\texttt{decompose} and \small\texttt{force} are monotonic, we also have

    \displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose}) \  g' \sqsubseteq (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force}) \  g'

    Hence combining these two sub-Tgraph results we have

    \displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose}) \  g' \equiv (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{force}) \  g'

    \square

It is important to point out that if g is a correct Tgraph and \small\texttt{compose} \  g is a perfect composition, this is not the same as saying that g composes perfectly. It could be the case that g has more faces than (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g and so g could have unknowns. In this case we can only prove that

\displaystyle  (\small\texttt{force} \cdot \small\texttt{compose}) \  g \sqsubseteq (\small\texttt{compose} \cdot \small\texttt{force}) \  g

As an example where this is not an equivalence, choose g to be a star. Then its composition is the empty Tgraph (which is still a perfect composition) and so the left hand side is the empty Tgraph, but the right hand side is a sun.

Perfectly composing generators

The perfect composition theorem, its lemmas, and the three corollaries justify all the commuting implied by the diagrams in figure 8. However, one might ask a more general question: under what circumstances do we have (for a correct forced Tgraph g_f)

\displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) \  g_f \equiv g_f

Definition: A generator of a correct forced Tgraph g_f is any Tgraph g such that g \sqsubseteq g_f and \small\texttt{force} \ g \equiv g_f.

We can now state that

Corollary: If a correct forced Tgraph g_f has a generator which composes perfectly, then

\displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) \  g_f \equiv g_f

Proof: This follows directly from lemma 4 and the perfect composition theorem. \square

As an example where the required generator does not exist, consider the rightmost Tgraph of the middle row in figure 9. It is generated by the Tgraph directly below it, but it has no generator with a perfect composition. The Tgraph directly above it in the top row is the result of applying (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) which has lost the leftmost dart of the Tgraph.

Figure 9: A Tgraph without a perfectly composing generator

We could summarise this section by saying that \small\texttt{compose} can lose information which cannot be recovered by a subsequent \small\texttt{force} and, similarly, \small\texttt{decompose} can lose information which cannot be recovered by a subsequent \small\texttt{force}. We have defined perfect compositions as the Tgraphs that do not lose information when decomposed, and Tgraphs which compose perfectly as those that do not lose information when composed. Forcing does the same thing at each level of composition (that is, it commutes with composition) provided information is not lost when composing.
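Expressed as a QuickCheck-style property, the commuting result reads roughly as follows. This is only a sketch: it assumes the library's \small\texttt{compose} and \small\texttt{force}, plus a hypothetical composesPerfectly predicate and a hypothetical equivalence-up-to-relabelling operator (===).

```haskell
-- Sketch, not runnable as-is: composesPerfectly and (===) are
-- hypothetical names for "g composes perfectly" and for Tgraph
-- equivalence up to vertex relabelling.
prop_perfectComposition :: Tgraph -> Property
prop_perfectComposition g =
  composesPerfectly g ==>
    (compose . force) g === (force . compose) g
```

The precondition matters: as the star example above shows, the property fails for Tgraphs whose composition is perfect but which do not themselves compose perfectly.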

4. Multiple Compositions

We know from the Compose Force theorem that the composition of a Tgraph that is forced is always a valid Tgraph. In this section we use this and the results from the last section to show that composing a forced, correct Tgraph produces a forced Tgraph.

First we note that:

Lemma 5: The composition of a forced, correct Tgraph is wholetile complete.

Proof: Let g' = \small\texttt{compose} \  g_f where g_f is a forced, correct Tgraph. A boundary join in g' implies there must be a boundary dart wing of the composable faces of g_f. (See for example figure 4 where this would be vertex 2 for the half dart case, and vertex 5 for the half-kite face). This dart wing cannot be an unknown as the half-dart is in the composable faces. However, a known dart wing must be either a large kite centre or a large dart base and therefore internal in the composable faces of g_f (because of the force rules) and therefore not on the boundary in g'. This is a contradiction showing that g' can have no boundary joins and is therefore wholetile complete. \square

Theorem: The composition of a forced, correct Tgraph is a forced Tgraph.

Proof: Let g' = \small\texttt{compose} \  g_f for some forced, correct Tgraph g_f, then g' is wholetile complete (by lemma 5) and therefore a perfect composition. Let g = \small\texttt{decompose} \  g', so g composes perfectly (g' \equiv \small\texttt{compose} \  g). By the perfect composition theorem we have

\displaystyle (**) \ \ \ \  \ (\small\texttt{compose} \cdot \small\texttt{force}) \  g \equiv (\small\texttt{force} \cdot \small\texttt{compose}) \  g \equiv \small\texttt{force} \  g'

We also have

\displaystyle  g \equiv \small\texttt{decompose} \  g' \equiv (\small\texttt{decompose} \cdot \small\texttt{compose}) \  g_f \sqsubseteq g_f

Applying \small\texttt{force} to both sides, noting that \small\texttt{force} is monotonic and the identity on forced Tgraphs, we have

\displaystyle  \small\texttt{force} \  g \sqsubseteq \small\texttt{force} \  g_f \equiv g_f

Applying \small\texttt{compose} to both sides, noting that \small\texttt{compose} is monotonic, we have

\displaystyle  (\small\texttt{compose} \cdot \small\texttt{force}) \  g \sqsubseteq \small\texttt{compose} \  g_f \equiv g'

By (**) above, the left hand side is equivalent to \small\texttt{force} \  g' so we have

\displaystyle  \small\texttt{force} \  g' \sqsubseteq g'

but since we also have (\small\texttt{force} being non-decreasing)

\displaystyle  g' \sqsubseteq \small\texttt{force} \  g'

we have established that

\displaystyle  g' \equiv \small\texttt{force} \  g'

which means g' is a forced Tgraph. \square

This result means that, after forcing once, we can repeatedly compose, creating valid Tgraphs at each step, until we reach the empty Tgraph.
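In code this licenses something like the following sketch (assuming the library's \small\texttt{force} and \small\texttt{compose}, together with a hypothetical nullGraph emptiness test):

```haskell
-- Sketch: after forcing once, iterated composition yields a chain of
-- valid (indeed forced) Tgraphs ending at the empty Tgraph.
forcedCompositions :: Tgraph -> [Tgraph]
forcedCompositions g = takeWhile (not . nullGraph)
                     $ iterate compose (force g)
```

By the theorem above, every element of the resulting list is a forced Tgraph, so no further forcing is needed between compositions.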

We can also use lemma 5 to establish the converse to a previous corollary:

Corollary: If a correct forced Tgraph g_f satisfies:

\displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) \  g_f \equiv g_f

then g_f has a generator which composes perfectly.

Proof: By lemma 5, \small\texttt{compose} \ g_f is wholetile complete and hence a perfect composition. This means that (\small\texttt{decompose} \cdot \small\texttt{compose}) \ g_f composes perfectly and it is also a generator for g_f because

\displaystyle  (\small\texttt{force} \cdot \small\texttt{decompose} \cdot \small\texttt{compose}) \  g_f \equiv g_f

\square

5. Proof of the Compose Force theorem

Theorem (Compose Force): Composition of a forced Tgraph produces a valid Tgraph.

Proof: For any forced Tgraph we can construct the composed faces. For the result to be a valid Tgraph we need to show no crossing boundaries and connectedness for the composed faces. These are proved separately by case analysis below.

Proof of no crossing boundaries

Assume g_f is a forced Tgraph and that it has a non-empty set of composed faces (we can ignore cases where the composition is empty as the empty Tgraph is valid). Consider a vertex v in the composed faces of g_f and first take the case that v is on the boundary of g_f. We consider the possible local contexts for a vertex v on a forced Tgraph boundary and the nature of the composed faces at v in each case.

Figure 10: Forced Boundary Vertex Contexts

Figure 10 shows local contexts for a boundary vertex v in a forced Tgraph where the composition is non-empty. In each case v is shown as a red dot, and the composition is shown filled yellow. The cases for v are shown in rows: the first row is for dart origins, the second row is for kite origins, the next two rows are for kite wings, and the last two rows are for kite opps. The dart wing cases are a subset of the kite opp cases, so not repeated, and dart opp vertices are excluded because they cannot be on the boundary of a forced Tgraph. We only show left-hand versions, so there is a mirror symmetric set for right-hand versions.

It is easy to see that there are no crossing boundaries of the composed faces at v in each case. Since any boundary vertex of any forced Tgraph (with a non-empty composition) must match one of these local context cases around the vertex, we can conclude that a boundary vertex of g_f cannot become a crossing boundary in \small\texttt{compose} \  g_f.

Next take the case where v is an internal vertex of g_f.

Figure 11: Vertex types and their relationships

Figure 11 shows relationships between the forced Tgraphs of the 7 (internal) vertex types (plus a kite at the top right). The red faces are those around the vertex type and the black faces are those produced by forcing (if any). Each forced Tgraph has its composition directly above with empty compositions for the top row. We note that a (forced) star, jack, king, and queen vertex remains an internal vertex in the respective composition so cannot become a crossing boundary vertex. A deuce vertex becomes the centre of a larger kite and is no longer present in the composition (top right). That leaves cases for the sun vertex and ace vertex (=fool vertex). The sun Tgraph (sunGraph) and fool Tgraph (fool) consist of just the red faces at the respective vertex (shown top left and top centre). These both have empty compositions when there is no surrounding context. We thus need to check possible forced local contexts for sunGraph and fool.

The fool case is simple and similar to a deuce vertex in that it is never part of a composition. [To see this, consider inverting the decomposition arrows shown in figure 4. In both cases we see the half-dart opp vertex (labelled 4 in figure 4) is removed.]

For the sunGraph there are only 7 local forced context cases to consider where the sun vertex is on the boundary of the composition.

Figure 12: Forced Contexts for a sun vertex v where v is on the composition boundary

Six of these are shown in figure 12 (the missing one is just a mirror reflection of the fourth case). Again, the relevant vertex v is shown as a red dot and the composed faces are shown filled yellow, so it is easy to check that there is no crossing boundary of the composed faces at v in each case. Every forced Tgraph containing an internal sun vertex where the vertex is on the boundary of the composition must match one of the 7 cases locally round the vertex.

Thus no vertex from g_f can become a crossing boundary vertex in the composed faces and since the vertices of the composed faces are a subset of those of g_f, we can have no crossing boundary vertex in the composed faces.

Proof of Connectedness

Assume g_f is a forced Tgraph as before. We refer to the half-tile faces of g_f that get included in the composed faces as the composable faces and the rest as the remainder faces. We want to prove that the composable faces are connected as this will imply the composed faces are connected.

As before we can ignore cases where the set of composable faces is empty, and assume this is not the case. We study the nature of the remainder faces of g_f. Firstly, we note:

Lemma (remainder faces)

The remainder faces of g_f are made up entirely of groups of half-tiles which are either:

  1. Half-fools (= a half dart and both halves of the kite attached to its short edge) where the other half-fool is entirely composable faces, or
  2. Both halves of a kite with both short edges on the (g_f) boundary (so they are not part of a half-fool) where only the origin is in common with composable faces, or
  3. Whole fools with just the shared kite origin in common with composable faces.
Figure 13: Remainder face groups (cases 1,2, and 3)

These 3 cases of remainder face groups are shown in figure 13. In each case the border in common with composable faces is shown yellow and the red edges are necessarily on the boundary of g_f (the black boundary could be on the boundary of g_f or shared with another remainder face group). [A mirror symmetric version for the first group is not shown.] Examples can be seen in figure 12, where the first Tgraph has four examples of case 1 and two of case 2, the second has six examples of case 1 and two of case 2, and the fifth Tgraph has an example of case 3 as well as four of case 1. [We omit the detailed proof of this lemma which reasons about what gets excluded in a composition after forcing. However, all the local context cases are included in figure 14 (left-hand versions), where we only show those contexts where there is a non-empty composition.]

We note from the (remainder faces) lemma that the common boundary of the group of remainder faces with the composable faces (shown yellow in figure 13) is just a single vertex in cases 2 and 3. In case 1, the common boundary is just a single edge of the composed faces which is made up of 2 adjacent edges of the composable faces that constitute the join of two half-fools.

This means each (remainder face) group shares boundary with exactly one connected component of the composable faces.

Next we establish that if two (remainder face) groups are connected they must share boundary with the same connected component of the composable faces. We need to consider how each (remainder face) group can be connected with a neighbouring such group. It is enough to consider forced contexts of boundary dart long edges (for cases 1 and 3) and boundary kite short edges (for case 2). The cases where the composition is non-empty all appear in figure 14 (left-hand versions) along with boundary kite long edges (middle two rows) which are not relevant here.

Figure 14: Forced contexts for boundary edges

We note that, whenever one group of the remainder faces (half-fool, whole-kite, whole-fool) is connected to a neighbouring group of the remainder faces, the common boundary (shared edges and vertices) with the composable faces is also connected, forming either 2 adjacent composed face boundary edges (= 4 adjacent edges of the composable faces), or a composed face boundary edge and one of its end vertices, or a single composed face boundary vertex.

It follows that any connected collection of the remainder face groups shares boundary with a unique connected component of the composable faces. Since the collection of composable and remainder faces together is connected (g_f is connected), the removal of the remainder faces cannot disconnect the composable faces: for that to happen, at least one connected collection of remainder face groups would have to share boundary with more than one connected component of the composable faces, which we have just ruled out.

This establishes connectedness of any composition of a forced Tgraph, and this completes the proof of the Compose Force theorem. \square


by readerunner at September 12, 2023 01:33 PM

September 10, 2023

Mark Jason Dominus

The Killer Whale Dagger

Last month Toph and I went on vacation to Juneau, Alaska. I walked up to look at the glacier, but mostly we just enjoyed the view and the cool weather. But there were some surprises.

[Image: the Killer Whale dagger, a shiny steel blade with copper overlay and a leather-wrapped grip. The integral pommel is relief-formed into two outward-looking orca heads with a single dorsal fin, pierced by a cut hole, extending upward between them.]

One day we took a cab downtown, and our driver, Edwell John, asked where we were visiting from, as cab drivers do. We said we were from Philadelphia, and he told us he had visited Philadelphia himself.

“I was repatriating Native artifacts,” he said.

Specifically, he had gone to the University of Pennsylvania Museum to take back the Killer Whale Dagger named Keet Gwalaa. This is a two-foot-long dagger that was forged by Tlingit people in the 18th century from meteorite steel.

This picture comes from the Penn Museum. (I think this isn't the actual dagger, but a reproduction they had made after M. John took back the original.)

This was very exciting! I asked “where is the dagger now?” expecting that it had been moved to a museum in Angoon or something.

“Oh, I have it,” he said.

I was amazed. “What, like on you?”

“No, at my house. I'm the clan leader of the Killer Whale clan.”

Then he took out his phone and showed us a photo of himself in his clan leader garb, carrying the dagger.

Here's an article about M. John visiting the Smithsonian to have them 3-D scan the Killer Whale hat. Then the Smithsonian had a replica hat made from the scan.


by Mark Dominus (mjd@plover.com) at September 10, 2023 05:04 PM

Magnus Therning

Using emacs for the scrollback in terminal multiplexers

I should start by saying that I still don't really know if this is a good idea or not, but it feels like it's worth trying out at least.

An irritating limitation in Zellij, and a possible solution

After seeing Zellij mentioned in an online community I thought it might be worth trying out. I'm not really disappointed with tmux; I've been using it for years, but I actually only use a small part of what it can do. I create tabs, sometimes create panes, and I regularly use the scrollback functionality to copy output of commands I've run earlier.

Unfortunately, as is reported in a ticket, Zellij can't select and copy using the keyboard. From the discussion in that ticket it seems unlikely it ever will be able to. After finding that out I resigned myself to staying with tmux – I'm not ready to go back to using a pointing device to select and copy text in my terminal!

When I was biking to the pool yesterday I realised a thing though: I'm already using a tool that is very good at manipulating text using the keyboard. Of course I'm talking about Emacs! So if I can just make Emacs start up quickly enough I ought to be able to use it for searching, selecting and copying from the scrollback buffer. I haven't found a way to do this in tmux yet, but Zellij has EditScrollBack so I can at least try it out.

A nice benefit is that I can cut back on the number of different shortcuts I use daily.

My slimmed down Emacs config

My current Emacs config starts up in less than 2s, which is good enough as I normally start Emacs at most a few times per day. However, if I have to wait 2s to open the scrollback buffer I suspect I'll tire very quickly and abandon the experiment. So I took my ordinary config and slimmed it down. Cutting away things that weren't related to navigation and searching.

The list of packages is not very long:

  • straight
  • use-package
  • evil
  • general
  • which-key
  • vertico
  • orderless
  • marginalia
  • consult

The first 5 are for usability in general, and the last 4 bring in the functions I use for searching through text.

The resulting config starts in less than ¼s. That's more than acceptable, I find.

Copying to the Wayland clipboard

It turns out that copying with `y` in evil does the right thing by default and the copied text ends up in the clipboard without any special configuration.

From Tmux I'm used to being thrown out of copy-mode after copying something. While that's sometimes irritating, it's at other times exactly what I want. Given that evil-yank does the right thing it was easy to write a function for it:

(defun se/yank-n-kill (beg end)
  "Yank the region from BEG to END, then exit Emacs."
  (interactive "r")
  (evil-yank beg end)
  (kill-emacs))

The config files

I'm keeping my dot-files in a private repo, but I put a snapshot of the Emacs and Zellij config in a snippet.

September 10, 2023 11:17 AM

Michael Snoyman

Owned values and Futures in Rust

Let's write a simple tokio-powered program that will download the contents of an HTTP response body using reqwest and print it to stdout. We'll take the URL to download on the command line using clap. This might look something like the following:

use anyhow::Result;
use clap::Parser;

#[derive(clap::Parser)]
struct Opt {
    url: String,
}

#[tokio::main]
async fn main() -> Result<()> {
    let Opt { url } = Opt::parse();
    let body = reqwest::get(url).await?.text().await?;
    println!("{body}");
    Ok(())
}

All good, but let's (arguably) improve our program by extracting the logic for the download-and-print to a helper function:

#[tokio::main]
async fn main() -> Result<()> {
    let Opt { url } = Opt::parse();
    download_and_print(&url).await?;
    Ok(())
}

async fn download_and_print(url: &str) -> Result<()> {
    let body = reqwest::get(url).await?.text().await?;
    println!("{body}");
    Ok(())
}

I've followed general best practices here and taken the url as a string slice instead of an owned string. Now, it's really easy to extend this program to support multiple URLs:

#[derive(clap::Parser)]
struct Opt {
    urls: Vec<String>,
}

#[tokio::main]
async fn main() -> Result<()> {
    let Opt { urls } = Opt::parse();
    for url in urls {
        download_and_print(&url).await?;
    }
    Ok(())
}

But now, let's kick it up a notch and introduce some parallelism. We're going to use a JoinSet to allow us to spawn off a separate task per URL provided and wait on all of them returning. If anything fails along the way, we'll exit the entire program and abort ongoing activities.

#[tokio::main]
async fn main() -> Result<()> {
    let Opt { urls } = Opt::parse();
    let mut set = tokio::task::JoinSet::new();

    for url in urls {
        set.spawn(download_and_print(&url));
    }

    while let Some(result) = set.join_next().await {
        match result {
            Ok(Ok(())) => (),
            Ok(Err(e)) => {
                set.abort_all();
                return Err(e);
            }
            Err(e) => {
                set.abort_all();
                return Err(e.into());
            }
        }
    }
    Ok(())
}

While the parallelism logic here is fine, spawning the new tasks fails to compile:

error[E0597]: `url` does not live long enough
  --> src/main.rs:15:38
   |
14 |     for url in urls {
   |         --- binding `url` declared here
15 |         set.spawn(download_and_print(&url));
   |                   -------------------^^^^-
   |                   |                  |
   |                   |                  borrowed value does not live long enough
   |                   argument requires that `url` is borrowed for `'static`
16 |     }
   |     - `url` dropped here while still borrowed

This is a common failure mode in async (and, for that matter, multithreaded) Rust. The issue is that the String we want to pass is owned by the main task, and we're trying to pass a reference to it with no guarantee that the main task will outlive the child task. You might argue that the main task will always outlive all other tasks, but (1) there's no static proof of that within the code, and (2) it's entirely possible to slightly refactor this program so that the spawning occurs in a subtask instead.

The question is: how do you fix this compile time error? We'll explore a few options.

Take a String

Arguably the simplest solution is to change the type of the download_and_print function so that it takes an owned String instead of a reference:

async fn download_and_print(url: String) -> Result<()> {
    let body = reqwest::get(url).await?.text().await?;
    println!("{body}");
    Ok(())
}

Now, at the call site, we're no longer borrowing a reference to the main task's String. Instead, we pass in the entire owned value, transferring ownership to the newly spawned task:

for url in urls {
    // Note the lack of & here!
    set.spawn(download_and_print(url));
}

On the one hand, this feels dirty. We're violating best practices and taking an owned String where one isn't needed. This may be considered a small price to pay for the code simply working. However, if the download_and_print function will be used in other parts of the code base where passing a reference works fine, forcing an owned String will cause an unnecessary allocation for those use cases, and we may want to look for a better solution.

Adjust the callsite with async move

Another possibility is to leave our download_and_print function as-is taking a reference, and modify our call site as follows:

for url in urls {
    set.spawn(async move { download_and_print(&url).await });
}

By introducing an async move block, what we've done is created a new Future that will be passed to set.spawn. That new Future itself owns the String, not the main task. Therefore, borrowing a reference to url and passing it to download_and_print works just fine.

This is a great solution when you're using a library function that you cannot modify, or when most of your code doesn't run into this lifetime issue. But it can be a bit tedious to have to rewrite code in this way.

impl AsRef

Our final approach today will be to modify the function to accept a more general url type:

async fn download_and_print(url: impl AsRef<str>) -> Result<()> {
    let body = reqwest::get(url.as_ref()).await?.text().await?;
    println!("{body}");
    Ok(())
}

This type means "I'll accept anything that can be converted into a &str." This will work for an owned String as well as a string slice, leaving the decision entirely to the caller. If we leave our call site as passing in a reference, we'll still get the lifetime error above. But if instead we pass in url directly, our program once again works.

This is the approach I'd probably recommend in general. It takes a bit of practice to get used to these impl AsRef parameters, but the payoff is worth it in my opinion.
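As a minimal self-contained illustration of why this shape helps (no networking here; the describe function is a hypothetical stand-in for download_and_print):

```rust
// `impl AsRef<str>` accepts both owned Strings and string slices,
// leaving the ownership decision entirely to the caller.
fn describe(url: impl AsRef<str>) -> String {
    // as_ref() yields a &str no matter what the caller passed.
    format!("fetching {}", url.as_ref())
}

fn main() {
    let owned: String = String::from("https://example.com");

    // Passing the owned String moves it into the function -- the form
    // that satisfies the 'static bound when spawning tasks.
    println!("{}", describe(owned));

    // A plain string slice still works, so callers that can borrow
    // pay no extra allocation.
    println!("{}", describe("https://example.org"));
}
```

The same call site compiles whether the caller has an owned String (as in the spawn loop) or only a borrowed &str.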

Improvements

The code above is not perfect. I'm sure others will find other limitations, but two things that jump out at me are:

  1. Instead of using reqwest::get, we should be creating a single reqwest::Client and sharing it throughout the application.
  2. For a large number of incoming URLs, we wouldn't want to spawn a separate task per URL, but instead have a fixed number of workers and have them all pop work items from a shared queue. This would help with avoiding rate limiting from servers and from overwhelming our application. But the number of URLs we'd have to be requesting would need to be pretty high to run into either of these issues in practice.

Fortunately, both of these are relatively easy to implement thanks to the simplicity of the JoinSet API:

use anyhow::Result;
use async_channel::Receiver;
use clap::Parser;

#[derive(clap::Parser)]
struct Opt {
    urls: Vec<String>,
    #[clap(long, default_value_t = 8)]
    workers: usize,
}

#[tokio::main]
async fn main() -> Result<()> {
    let Opt { urls, workers } = Opt::parse();
    let mut set = tokio::task::JoinSet::new();
    let client = reqwest::Client::new();
    let (send, recv) = async_channel::bounded(workers * 2);

    set.spawn(async move {
        for url in urls {
            send.send(url).await?;
        }
        Ok(())
    });

    for _ in 0..workers {
        set.spawn(worker(client.clone(), recv.clone()));
    }

    while let Some(result) = set.join_next().await {
        match result {
            Ok(Ok(())) => (),
            Ok(Err(e)) => {
                set.abort_all();
                return Err(e);
            }
            Err(e) => {
                set.abort_all();
                return Err(e.into());
            }
        }
    }
    Ok(())
}

async fn worker(client: reqwest::Client, recv: Receiver<String>) -> Result<()> {
    while let Ok(url) = recv.recv().await {
        download_and_print(&client, &url).await?;
    }
    Ok(())
}

async fn download_and_print(client: &reqwest::Client, url: impl AsRef<str>) -> Result<()> {
    let body = client.get(url.as_ref()).send().await?.text().await?;
    println!("{body}");
    Ok(())
}

September 10, 2023 12:00 AM

September 09, 2023

Mark Jason Dominus

My favorite luxurious office equipment is low-tech

Cheap wooden back scratcher hanging on a hook

This is about the stuff I have in my office that I could live without but wouldn't want to. Not stuff like “a good chair” because a good chair is not optional. And not stuff like “paper”. This is the stuff that you might not have thought about already.

The back scratcher at right cost me about $1 and brings me joy every time I use it. My back is itchy, it is distracting me from work, aha, I just grab the back scratcher off the hook and the problem is solved in ten seconds. Not only is it a sensual pleasure, but also I get the satisfaction of a job done efficiently and effectively.

Computer programmers often need to be reminded that the cheap, simple, low-tech solution is often the best one. Perfection is achieved not when there is nothing more to add, but when there is nothing more to take away. I see this flawlessly minimal example of technology every time I walk into my office and it reminds me of the qualities I try to put into my software.

These back scratchers are available everywhere. If your town has a dollar store or an Asian grocery, take a look. I think the price has gone up to $2.


When I was traveling a lot for ZipRecruiter, I needed a laptop stand. (Using a laptop without a stand is bad for your neck.) I asked my co-workers for recommendations and a couple of them said that the Roost was nice. It did seem nice, but it cost $75. So I did a Google search for “laptop stand like Roost but cheap” and this is what I found.

black laptop stand on the floor.  It is three X'es, two parallel and about twelve inches apart, with arms to hold up the laptop, and one in the front to hold the other two together

This is a Nexstand. The one in this picture is about ten years old. It has performed flawlessly. It has never failed. There has never been any moment when I said “ugh, this damn thing again, always causing problems.”

The laptop stand folded up into a compact square rod about 14 inches long and two inches across.

It folds up and deploys in seconds.

It weighs eight ounces. That's 225 grams.

It takes up the smallest possible amount of space in my luggage. Look at the picture at left. LOOK AT IT I SAY.

The laptop height is easily adjustable.

The Nexstand currently sells for $25–35. (The Roost is up to $90.)

This is another “there is nothing left to take away” item. It's perfect the way it is. This picture shows it quietly doing its job with no fuss, as it does every day.

Laptop stand on my desk, supporting an unusually large laptop.


This last item has changed my life. Not drastically, but significantly, and for the better.

The Vobaga mug warmer is a flat black thing with a cord coming out of the back.  On top is a flat circular depression with the warning “CAUTION: HOT SURFACE”.  On the front edge are two round buttons, one blue and one red.

This is a Vobaga electric mug warmer. You put your mug on it, and the coffee or tea or whatever else is in the mug stays hot, but not too hot to drink, indefinitely.

The button on the left turns the power on and off. The button on the right adjusts the temperature: blue for warm, purple for warmer, and red for hot. (The range is 104–149°F (40–65°C). I like red.) After you turn off the power, the temperature light blinks for a while to remind you not to put your hand on it.

That is all it does, it is not programmable, it is not ⸢smart⸣, it does not require configuration, it does not talk to the Internet, it does not make any sounds, it does not spy on me, it does not have a timer, it does do one thing and it does it well, and I never have to drink lukewarm coffee.

The power cord is the only flaw, because it plugs into wall power and someone might trip on it and spill your coffee, but it is a necessary flaw. You can buy a mug warmer that uses USB power. When I first looked into mug warmers I was puzzled. Surely, I thought, a USB connection does not deliver enough power to keep a mug of coffee warm? At the time, this was correct. USB 2 can deliver 5V at up to 0.5A, a total of 2.5 watts of power. That's only 0.59 calorie per second. Ridiculous. The Vobaga can deliver 20 watts. That is enough.

Vobaga makes this in several colors (not that anything is wrong with black) and it costs around $25–30. The hot round thing is 4 inches in diameter (10 cm) and neatly fits all my mugs, even the big ones. It does not want to go in the dishwasher but easily wipes clean with a damp cloth. I once spilled the coffee all over it but it worked just fine once it dried out because it is low tech.

It's just another one of those things that works, day in and day out, without my having to think about it, unless I feel like gloating about how happy it makes me.

[ Addendum: I have no relationship with any of these manufacturers except as a satisfied customer of their awesome products. Do I really need to say that? ]

by Mark Dominus (mjd@plover.com) at September 09, 2023 02:02 PM

September 05, 2023

Lysxia's blog

Abstract nonsense

I’ve been reading The Joy of Abstraction, by Eugenia Cheng. Very accessible. Would recommend. It’s doing good stuff to my mind.


Abstraction, food for thought

Two apples are the same as two apples.

Two apples are not the same as two oranges.

Two ripe apples are not the same as two rotten apples, even though they are both two apples and two apples.

Two fruits are the same as two fruits, even though they could be two apples and two oranges.

by Lysxia at September 05, 2023 12:00 AM

September 03, 2023

Gabriella Gonzalez

Applicatives should usually implement Semigroup and Monoid

lift-monoid

The gist of this post is that any type constructor F that implements Applicative:

instance Applicative F where

… should usually also implement the following Semigroup and Monoid instances:

instance Semigroup a => Semigroup (F a) where
    (<>) = liftA2 (<>)

instance Monoid a => Monoid (F a) where
    mempty = pure mempty

… which one can also derive using the Data.Monoid.Ap type, which was created for this purpose:

deriving (Semigroup, Monoid) via (Ap F a)

Since each type constructor that implements Monad also implements Applicative, this recommendation also applies for all Monads, too.

Why are these instances useful?

The above instances come in handy in conjunction with utilities from Haskell’s standard library that work with Monoids.

For example, a common idiom I see when doing code review is something like this:

instance Monad M where
    …

example :: M [B]
example = do
    let process :: A -> M [B]
        process a = do
            …

            return bs

    let inputs :: [A]
        inputs = …

    bss <- mapM process inputs

    return (concat bss)

… but if you implemented the suggested Semigroup and Monoid instances then you could replace this:

    bss <- mapM process inputs

    return (concat bss)

… with this:

    foldMap process inputs
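As a quick check that the two formulations agree, here is a self-contained sketch using IO, whose Semigroup and Monoid instances in base are exactly the lifted ones recommended here. The process function and its inputs are made up purely for illustration:

```haskell
-- `process` stands in for the monadic action from the example above;
-- its implementation here is invented for demonstration only.
process :: Int -> IO [Int]
process a = pure [a, a * 10]

-- The original idiom: mapM, then concat
viaMapM :: IO [Int]
viaMapM = do
  bss <- mapM process [1, 2, 3]
  return (concat bss)

-- The Monoid-powered replacement: IO's Monoid instance is the lifted one
viaFoldMap :: IO [Int]
viaFoldMap = foldMap process [1, 2, 3]

main :: IO ()
main = do
  xs <- viaMapM
  ys <- viaFoldMap
  print (xs == ys)  -- both produce [1,10,2,20,3,30]
```

Running this prints True: the foldMap version performs the same actions and produces the same result.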

These instances also come in handy when you need to supply an empty action or empty handler for some callback.

For example, the lsp package provides a sendRequest utility which has the following type:

sendRequest
    :: MonadLsp config f
    => SServerMethod m
    -> MessageParams m
    -> (Either ResponseError (ResponseResult m) -> f ())
    -- ^ This is the callback function
    -> f (LspId m)

I won’t go into too much detail about what the type means other than to point out that this function lets a language server send a request to the client and then execute a callback function when the client responds. The callback function you provide has type:

Either ResponseError (ResponseResult m) -> f ()

Sometimes you’re not interested in the client’s response, meaning that you want to supply an empty callback that does nothing. Well, if the type constructor f implements the suggested Monoid instance then the empty callback is: mempty.

mempty :: Either ResponseError (ResponseResult m) -> f ()

… and this works because of the following three Monoid instances that are automatically chained together by the compiler:

instance Monoid ()

-- The suggested Monoid instance that `f` would ideally provide
instance Monoid a => Monoid (f a)

instance Monoid b => Monoid (a -> b)
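Here is a tiny runnable illustration of this trick, using IO in place of f (base already provides the lifted Monoid instance for IO, so no extra code is needed) and placeholder types standing in for the error and result types:

```haskell
-- mempty as a do-nothing callback. The compiler chains three instances:
-- Monoid (), Monoid a => Monoid (IO a), and Monoid b => Monoid (a -> b).
emptyCallback :: Either String Int -> IO ()
emptyCallback = mempty

main :: IO ()
main = do
  emptyCallback (Left "ignored")  -- does nothing
  emptyCallback (Right 42)        -- does nothing
  putStrLn "done"
```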

In fact, certain Applicative/Monad-related utilities become special cases of simpler Monoid-related utilities once you have this instance. For example:

  • You can sometimes replace traverse_ / mapM_ with the simpler foldMap utility

    Specifically, if you specialize the type of traverse_ / mapM_ to:

    traverse_ :: (Foldable t, Applicative f) => (a -> f ()) -> t a -> f ()
    mapM_     :: (Foldable t, Monad       f) => (a -> f ()) -> t a -> f ()

    … then foldMap behaves the same way when the Applicative f implements the suggested instances:

    foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
  • You can sometimes replace sequenceA_ / sequence_ with the simpler fold utility

    Specifically, if you specialize the type of sequenceA_ / sequence_ to:

    sequenceA_ :: (Foldable t, Applicative f) => t (f ()) -> f ()
    sequence_  :: (Foldable t, Monad       f) => t (f ()) -> f ()

    … then fold behaves the same way when the Applicative f implements the suggested instances:

    fold :: (Foldable t, Monoid m) => t m -> m
  • You can sometimes replace replicateM_ with mtimesDefault

    Specifically, if you specialize the type of replicateM_ to:

    replicateM_ :: Applicative f => Int -> f () -> f ()

    … then mtimesDefault behaves the same way when the Applicative f implements the suggested instances:

    mtimesDefault :: Monoid m => Int -> m -> m
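For example, under the suggested instances (which base already provides for IO), the two lines in this sketch perform the same three actions:

```haskell
import Control.Monad (replicateM_)
import Data.Semigroup (mtimesDefault)

main :: IO ()
main = do
  replicateM_ 3 (putStrLn "tick")             -- the Control.Monad version
  mtimesDefault (3 :: Int) (putStrLn "tick")  -- the Monoid version: IO () is a Monoid
```

Both print "tick" three times; mtimesDefault n x is just x combined with itself n times via (<>).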

And you also gain access to new functionality which doesn’t currently exist in Control.Monad. For example, the following specializations hold when f implements the suggested instances:

-- This specialization is similar to the original `foldMap` example
fold :: Applicative f => [f [b]] -> f [b]

-- You can combine two handlers into a single handler
(<>) :: Applicative f => (a -> f ()) -> (a -> f ()) -> (a -> f ())

-- a.k.a. `pass` in the `relude` package
mempty :: Applicative f => f ()
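As a small demonstration of the handler-combining specialization, using two made-up IO handlers:

```haskell
-- Two callbacks, combined pointwise with (<>); the function and IO
-- Monoid instances in base make this work out of the box.
logIt :: String -> IO ()
logIt msg = putStrLn ("log: " ++ msg)

announce :: String -> IO ()
announce msg = putStrLn ("announce: " ++ msg)

combined :: String -> IO ()
combined = logIt <> announce

main :: IO ()
main = combined "hello"
```

This prints "log: hello" followed by "announce: hello": the combined handler runs both callbacks, left to right.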

When should one not do this?

You sometimes don’t want to implement the suggested Semigroup and Monoid instances when other law-abiding instances are possible. For example, sometimes the Applicative type constructor permits a different Semigroup and Monoid instance.

The classic example is lists, where the Semigroup / Monoid instances behave like list concatenation. Also, most of the exceptions that fall in this category are list-like, in the sense that they use the Semigroup / Monoid instances to model some sort of element-agnostic concatenation.

I view these “non-lifted” Monoid instances as a missed opportunity, because these same type constructors will typically also implement the exact same behavior for their Alternative instance, too, like this:

instance Alternative SomeListLikeType where
    empty = mempty

    (<|>) = (<>)

… which means that you have two instances doing the exact same thing, when one of those instances could potentially have been used to support different functionality. I view the Alternative instance as the more natural instance for element-agnostic concatenation since that is the only behavior the Alternative class signature permits. By process of elimination, the Monoid and Semigroup instances should in principle be reserved for the “lifted” implementation suggested by this post.

However, I also understand it would be far too disruptive at this point to change these list-like Semigroup and Monoid instances and expectations around them, so I think the pragmatic approach is to preserve the current Haskell ecosystem conventions, even if they strike me as less elegant.

Why not use Ap exclusively?

The most commonly cited objection to these instances is that you technically don’t need to add these lifted Semigroup and Monoid instances because you can access them “on the fly” by wrapping expressions in the Ap newtype before combining them.

For example, even if we didn’t have a Semigroup and Monoid instance, we could still write our original example using foldMap, albeit with more newtype-coercion boilerplate:

    getAp (foldMap (Ap . process) inputs)

… or perhaps using the newtype package on Hackage:

    ala' Ap foldMap process inputs

This solution is not convincing to me for a few reasons:

  • It’s unergonomic in general

    There are some places where Ap works just fine (such as in conjunction with deriving via), but typically using Ap directly within term-level code is a solution worse than the original problem; the newtype wrapping and unwrapping boilerplate more than counteracts the ergonomic improvements from using the Semigroup / Monoid instances.

  • In my view, there’s no downside to adding Semigroup and Monoid instances

    … when only one law-abiding implementation of these instances is possible. See the caveat in the previous section.

  • This line of reasoning would eliminate many other useful instances

    For example, one might remove the Applicative instance for list since it’s not the only possible instance and you could in theory always use a newtype to select the desired instance.

Proof of laws

For completeness, I should also mention that the suggested Semigroup and Monoid instances are guaranteed to always be law-abiding instances. You can find the proof in Appendix B of my Equational reasoning at scale post.

by Gabriella Gonzalez (noreply@blogger.com) at September 03, 2023 03:56 PM

September 02, 2023

Sandy Maguire

Certainty by Construction Progress Report 9

The following is a progress report for Certainty by Construction, a new book I’m writing on learning and effectively wielding Agda. Writing a book is a tedious and demoralizing process, so if this is the sort of thing you’re excited about, please do let me know!


It is now the wee hours of Sept 2, and it’s safe to say I did not make the deadline. The book is not yet finished come hell or high water. Damn. Here’s the state of the world:

  • Everything up to page 203/296 has been aggressively edited, in terms of prose, code, general presentation, and overall topic order. There are still a few TODOs to write chapter summaries, but those aren’t the end of the world if they don’t happen.
  • It’s now possible to build semi-readable epubs. Needing to run everything through the Agda compiler makes build pipelines surprisingly hard, but I think this should only require a couple of hours to get it into a good place.
  • I have commissioned a contest of potential covers for the book; no results yet, but I expect to have some things to look at by the end of this week.
  • Since my last update, I realized I had accidentally lost the chapter on ring solving when doing my big refactor. I’ve since found it, but it’s no longer particularly motivated and is rather out of place, so I think it’s going to get cut. Kill your darlings and all that.

All in all, I’m bummed I didn’t make the deadline, but the quality of the book is exponentially better, so I think it’s a worthwhile trade. I’ve got three/four chapters left to edit (depending on if ring solving gets cut), and I need to write a closing chapter to make the end less jarring.

On a personal note, although the book is much longer in content than my other books, it’s packed much tighter and thus is going to be physically smaller when I get it printed. For some reason that is holding a lot of space in my head right now, and steering me away from cutting too much. I suppose I shouldn’t fret too much; there’s still an index and glossary I need to add which will probably add a bit of length. Also I know this doesn’t matter, but I care about it nevertheless.

So why didn’t I get this done on time? The reason seems to be just that it was too ambitious a goal. I definitely underestimated the amount of polish required here. This month I put 65 hours of honest-to-goodness work into the book, which if you measure in terms of the 2.9h average hours of work that an officer worker does in a day is more than a full time job. It’s very late and I don’t know if that makes sense but I think it might.

Anyway, here’s the plan going forwards—I’ve got some of this week to work on the book before getting married and starting grad school. The goal is to just keep on at this pace for as long as I possibly can until I die or real life gets in the way. It sucks and I’m exhausted and would like to be finished with this thing, but it’s not done with me yet. And so we go on.

But maybe I’ll take tomorrow off because I need to sort out getting married, and I don’t think this kind of extreme focus is good for my mental health. It’s a bit of a balancing act though, because life is only going to get more busy after next week.

Sorry for the bleak trail off here. I should go to bed.

September 02, 2023 12:00 AM

September 01, 2023

Well-Typed.Com

Well Typed collaborates with the Haskell Community to support HLS development

Well-Typed has been collaborating with the Haskell Language Server (HLS) development team thanks to funding from Mercury, Hasura and the HLS Open Collective to support HLS development. This includes work on performance, release management and support for newer GHC releases, as well as taking advantage of new GHC features such as Multiple Home Units and serializing Core to improve performance.

HLS releases

HLS is heavily tied to GHC and the GHC API, so to use HLS with a Haskell project, you must use an HLS executable compiled with the exact same GHC that you use to compile your project. Not doing so can often lead to strange and inscrutable errors.1 Thus HLS and the GHCup installation tool distribute a matrix of HLS binaries that exactly match the corresponding GHC binaries installed by GHCup. To ensure that HLS binaries corresponding to new GHC releases are promptly available for developers to use, we must provide new HLS releases whenever new major or minor releases of GHC are made.

Thanks to the generous donations of the Haskell Community to the HLS Open Collective and in collaboration with Julian Ospald and the HLS maintainers, Well-Typed has been working to ensure that new HLS releases promptly follow the GHC release cycle, that the release CI responsible for actually producing HLS binary distributions is robust and easy to use, and that the metadata necessary for GHCup to provide HLS binaries is updated along with the release.

This has involved producing HLS releases including the 1.7.0.0, 1.8.0.0, 1.9.0.0, 1.10.0.0 and 2.0.0.0 HLS releases. For the future, we hope to encourage volunteers to create more releases, while ensuring that someone from Well-Typed is always available as a backup to ensure timely releases.

HLS support for new major GHC versions

HLS is a big and complex project with many sub packages and dependencies, and is a prominent client of the notoriously unstable GHC API. As such, it can be quite a task to upgrade it to work with new GHC versions.

As part of our work on HLS, we also use it as a staging ground for features that eventually make their way into GHC, such as serializing Core to improve performance. We also implement GHC features to make HLS work better (such as Multiple Home Units). These also require adapting so that HLS can work seamlessly with newer or older GHC versions.

Well-Typed has worked to upgrade HLS to have support for the GHC 9.2, 9.4 and 9.6 release series. We have also added ghcide (the core of HLS) to head.hackage to ensure that it is kept up to date with changes in GHC, and to make future porting efforts easier.

This work was supported by a combination of funding from the HLS Open Collective (for 9.4 support for HLS plugins, in particular adapting to various ghc-exactprint changes), Hasura (for 9.2 and 9.4 and Multiple Home Unit support) and Mercury (for 9.6 support).

Of course, this work would be impossible without the help of the many volunteers who contribute to HLS as well as all the maintainers and contributors to the many packages and dependencies that HLS relies upon.

Performance and memory usage improvements

We previously implemented improvements to recompilation avoidance and startup time as part of our ongoing work for Mercury. Recently, we have made efforts to reduce the memory usage of HLS so that it is more feasible to use it on larger projects and memory constrained systems.

While HLS builds on the GHC API, it faces a different set of design constraints than a compiler like GHC, which is traditionally used in batch mode. Rather than being a fire-and-forget program, HLS sessions are typically quite long-lasting. Additionally, HLS is optimised for interactive use, aiming to provide low-latency results to assist developers in a timely fashion as they write their programs. It is also heavily incremental, reusing old results as much as possible rather than restarting compilation from scratch, and making use of out-of-date results to provide low-latency information even at the cost of not always being strictly correct.

As such, it needs to keep track of a lot of additional information for these purposes that a typical GHC or even GHCi session does not, which has a cost in terms of memory usage. The memory usage of HLS has been a bone of contention for a while on large projects, which is why Mercury asked us to investigate and reduce the memory usage.

Through a combination of profiling HLS using info table profiling along with careful investigation using ghc-debug, we have managed to significantly reduce the memory used by HLS, and make further improvements to the startup time. Check out Finley’s recent post on reducing Haddock’s memory usage for an introduction to the techniques involved.

Some of these improvements included:

As a result of all this work, the heap usage of HLS on Mercury’s codebase went from 12 GB down to 7 GB when starting from scratch (with no disk cache), and from 8 GB down to 3.5 GB when starting with a warm disk cache. Moreover, startup times went from over 4 minutes to around 30 seconds.

Future work

In GHC 9.4, we added support for Multiple Home Units to the GHC API, allowing you to load multiple packages simultaneously into a single GHC API session. Recently, Matthew worked on Cabal to allow loading multiple components into a single GHCi session. We have a work-in-progress PR to allow HLS to exploit this feature so that users can work seamlessly across multiple packages in a single HLS session.

We plan to:

  • continue investigating and improving HLS memory usage, performance and usability,

  • help upgrade HLS to newer GHC versions where needed, and

  • support the volunteer community in promptly providing HLS releases corresponding to GHC releases.

Many thanks to Hasura, Mercury and the donors to the HLS Open Collective for their support in making these improvements possible, and to the whole community of Haskell developers on whose volunteer efforts this work depends.

If you would like to support this work, please consider contributing to the HLS Open Collective. Alternatively, Well-Typed are always interested in projects and looking for funding to improve GHC, HLS, Cabal and other Haskell tools. Please contact info@well-typed.com if we might be able to work with you!


  1. Inscrutable errors such as RTS panics or segfaults, especially when Template Haskell is involved.↩︎

by zubin at September 01, 2023 12:00 AM

August 30, 2023

Well-Typed.Com

The Haskell Unfolder Episode 10: generalBracket

Today, 2023-08-30, at 1830 UTC (11:30 am PDT, 2:30 pm EDT, 7:30 pm BST, 20:30 CEST, …) we are streaming the tenth episode of the Haskell Unfolder live on YouTube.

The Haskell Unfolder Episode 10: generalBracket

Exception handling is difficult, especially in the presence of asynchronous exceptions. In this episode we will revise the basics of bracket and why it’s so important. We will then discuss its generalisation generalBracket and its application in monad stacks.

About the Haskell Unfolder

The Haskell Unfolder is a YouTube series about all things Haskell hosted by Edsko de Vries and Andres Löh, with episodes appearing approximately every two weeks. All episodes are live-streamed, and we try to respond to audience questions. All episodes are also available as recordings afterwards.

We have a GitHub repository with code samples from the episodes.

And we have a public Google calendar (also available as ICal) listing the planned schedule.

by andres, edsko at August 30, 2023 12:00 AM

Oleg Grenrus

Using cabal-install's dependency solver as a SAT solver!?

Posted on 2023-08-30 by Oleg Grenrus

Dependency resolution in the Haskell ecosystem is a hard computational problem. While I'm unsure how hard picking individual package versions is on its own, without any additional features, selecting an assignment of automatic package flags seems to be very hard: it seems we can encode arbitrary boolean satisfiability problems, SAT, into the automatic package flag selection problem.

Real world flag selection problems are easy. Yet, I wanted to try how well cabal-install's solver copes with problems it hasn't been tuned for.

Boolean satisfiability problems

From Wikipedia:

In logic and computer science, the Boolean satisfiability problem is the problem of determining if there exists an interpretation that satisfies a given Boolean formula.

The problems are given to solvers in conjunctive normal form:

(x₁ ∨ x₂) ∧ (x₃ ∨ x₄)

and the solver's job is to find an assignment making the formula true. In the example above there are many solutions, e.g. setting all variables to true.

Sudoku

One of the go-to examples of what you can do with a SAT solver is solving sudoku puzzles.

Our running example will be a very simple 2×2 sudoku puzzle.

┌─────┬─────┐
│   1 │ 4   │
│   3 │   2 │
├─────┼─────┤
│     │ 3   │
│     │     │
└─────┴─────┘

Problem encoding is somewhat of an art form, but for sudoku it's quite simple.

The problem variables are the numbers in cells (i, j). We can encode each number using four variables, x(i,j,k), and require that exactly one is true. We could also use only two "bits" to encode four options (so-called binary encoding), but using one bit per option makes it easier to encode the sudoku rules.

Recall the sudoku rules: each number has to occur exactly once in each row, column and subsquare.

With our number encoding the puzzle rules are easy to encode. For example, for each row i and number k we require that exactly one literal in x(i,j,k), j <- [1..4] is true. And similarly for columns and subsquares.

For what it's worth, sudoku can be very neatly encoded using Applicatives and Traversables. See the StackOverflow answer by Conor McBride.

SAT solvers consume a DIMACS format which looks like:

p cnf 64 453
-60 -64 0
-48 -64 0
-48 -60 0
-44 -64 0
-44 -60 0
-44 -48 0
44 48 60 64 0
-59 -63 0
-47 -63 0
-47 -59 0
-43 -63 0
-43 -59 0
-43 -47 0
43 47 59 63 0
-58 -62 0
...

Borrowing the DIMACS format explanation from varisat's documentation:

A DIMACS file begins with a header line of the form p cnf <variables> <clauses>. Where <variables> and <clauses> are replaced with decimal numbers indicating the number of variables and clauses in the formula. Following the header line are the clauses of the formula. The clauses are encoded as a sequence of decimal numbers separated by spaces and newlines. For each clause the contained literals are listed followed by a 0.

The above is beginning of encoding of our sudoku problem. There are 4 × 4 cells and each number uses 4 variables, so in total there are 64 variables.

The exactly-once encoding I used is the naive (binomial) at-most-one encoding. You can see a pattern:

-60 -64 0
-48 -64 0
-48 -60 0
-44 -64 0
-44 -60 0
-44 -48 0
44 48 60 64 0

The last line requires that at least one of the four variables (44, 48, 60, 64) is true. The first 6 lines are pairwise requirements that at most one of the variables is true: 4 choose 2, i.e. 6 pairs. All of the sudoku rules are such exactly-one-of-four constraints, and we have 64 of them in total: 16 requiring each cell to hold exactly one digit, plus 16 each for the rows, columns and subsquares (4 units × 4 numbers). Each constraint takes 7 clauses, so that is 7 × 64 = 448 clauses.
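The naive encoding is easy to generate mechanically. Here is a small Haskell sketch (my own, not the post's actual generator) that emits the seven clauses for one exactly-one-of-four constraint; it produces the same clauses as the DIMACS excerpt above, in a slightly different order:

```haskell
import Data.List (tails)

-- Naive (binomial) exactly-one encoding: one pairwise at-most-one clause
-- per pair of variables, plus a single at-least-one clause at the end.
-- Positive n stands for the literal xₙ, negative n for ¬xₙ.
exactlyOne :: [Int] -> [[Int]]
exactlyOne vars =
  [ [-a, -b] | (a : rest) <- tails vars, b <- rest ] ++ [vars]

main :: IO ()
main = mapM_ print (exactlyOne [44, 48, 60, 64])
```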

The final 5 clauses are initial value constraints. As we know 5 numbers, we state that the corresponding variables must be true.

The sudoku.cnf file indeed ends with five unit clauses:

...
43 0
30 0
23 0
12 0
5 0

When we run the SAT solver, e.g. z3 -dimacs sudoku.cnf it will immediately give a solution which looks something like

sat
-1 2 -3 -4 5 -6 -7 -8 -9 -10 -11 12 -13 -14 15 -16
-17 -18 -19 20 -21 -22 23 -24 25 -26 -27 -28 -29 30 -31 -32
33 -34 -35 -36 -37 38 -39 -40 -41 -42 43 -44 -45 -46 -47 48
-49 -50 51 -52 -53 -54 -55 56 -57 58 -59 -60 61 -62 -63 -64 

For each of 64 variables it prints whether the satisfying assignment for that variable is true (positive) or false (negative).

When we decode the solution we'll get a solution to our sudoku puzzle:

┌─────┬─────┐
│ 2 1 │ 4 3 │
│ 4 3 │ 1 2 │
├─────┼─────┤
│ 1 2 │ 3 4 │
│ 3 4 │ 2 1 │
└─────┴─────┘
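The decoding step itself is only a few lines of Haskell. This sketch assumes (consistently with the unit clauses and the solution above) that variable cell*4 + k means "cell number cell holds digit k", with the 16 cells numbered row by row starting from 0 — an inferred convention, not stated explicitly in the post:

```haskell
-- Decode a satisfying assignment (positive = true literal) into a 4×4
-- grid, assuming variable index cell*4 + k encodes "cell holds digit k".
decode :: [Int] -> [[Int]]
decode assignment =
  rows [ k | cell <- [0 .. 15], k <- [1 .. 4], cell * 4 + k `elem` trues ]
  where
    trues = filter (> 0) assignment
    rows [] = []
    rows xs = take 4 xs : rows (drop 4 xs)

main :: IO ()
main = mapM_ print (decode [2,5,12,15,20,23,25,30,33,38,43,48,51,56,58,61])
```

Feeding it the positive literals from the z3 output reproduces the solved grid shown above.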

Encoding as cabal automatic flags

So how can we encode a SAT problem as flag selection one?

It is hopefully obvious that each variable will be represented by an automatic flag, i.e. a flag for which the solver can (should) choose an assignment:

flag 1
  manual: False
  default: False

(Yes, flag names can be "numbers", they are still treated as strings).

The default value shouldn't matter, but it's probably better to pick False, as most variables in the sudoku problem are indeed false.

Let's next think about how to encode clauses. When the CNF is satisfiable, each clause should evaluate to true. When the CNF is unsatisfiable, it's enough that some clause evaluates to false. Recall that clauses are disjunctions of literals:

x₁ ∨ ¬ x₂ ∨ ¬ x₃ ∨ x₄

Then we can encode such a clause as a conditional in a component stanza of the .cabal file. There shouldn't be an install plan if a clause's value is false:

if !(flag(1) || !flag(2) || !flag(3) || flag(4))
  build-depends: unsatisfiable <0

or equivalently:

if !flag(1) && flag(2) && flag(3) && !flag(4)
  build-depends: unsatisfiable <0
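The translation is entirely mechanical. Here is a sketch of a converter from a DIMACS clause to such a conditional (a hypothetical helper of mine, not part of the actual tooling): the condition is the negation of the clause, so any assignment that falsifies the clause rules out the install plan.

```haskell
import Data.List (intercalate)

-- Turn one CNF clause (a list of literals) into a cabal conditional.
clauseToCabal :: [Int] -> String
clauseToCabal lits =
  "if " ++ intercalate " && " (map negLit lits)
    ++ "\n  build-depends: unsatisfiable <0"
  where
    -- negate each literal: xₙ becomes !flag(n), ¬xₙ becomes flag(n)
    negLit n
      | n > 0     = "!flag(" ++ show n ++ ")"
      | otherwise = "flag(" ++ show (abs n) ++ ")"

main :: IO ()
main = putStrLn (clauseToCabal [1, -2, -3, 4])
```

For the clause x₁ ∨ ¬x₂ ∨ ¬x₃ ∨ x₄ this prints exactly the conditional shown above.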

The library stanza in the resulting sudoku.cabal file looks like

library
  if flag(60) && flag(64)
    build-depends: unsatisfiable <0

  if flag(48) && flag(64)
    build-depends: unsatisfiable <0

  if flag(48) && flag(60)
    build-depends: unsatisfiable <0

  if flag(44) && flag(64)
    build-depends: unsatisfiable <0

  if flag(44) && flag(60)
    build-depends: unsatisfiable <0

  if flag(44) && flag(48)
    build-depends: unsatisfiable <0

  if !flag(44) && !flag(48) && !flag(60) && !flag(64)
    build-depends: unsatisfiable <0

...

  if !flag(43)
    build-depends: unsatisfiable <0

  if !flag(30)
    build-depends: unsatisfiable <0

  if !flag(23)
    build-depends: unsatisfiable <0

  if !flag(12)
    build-depends: unsatisfiable <0

  if !flag(5)
    build-depends: unsatisfiable <0
...

And we can ask cabal-install to construct an install plan with

cabal build --dry-run

On my machine it took 17 seconds to complete. (I didn't really know what to expect; I'd say 17 seconds is not bad.)

cabal-install writes out a plan.json file containing the build plan. It's a JSON file which can be inspected directly or queried with the cabal-plan utility.

cabal-plan topo --show-flags

shows

sudoku-0 -1 -10 -11 +12 -13 -14 +15 -16 -17 -18 -19 +2 +20 -21 -22 +23
  -24 +25 -26 -27 -28 -29 -3 +30 -31 -32 +33 -34 -35 -36 -37 +38 -39 -4
  -40 -41 -42 +43 -44 -45 -46 -47 +48 -49 +5 -50 +51 -52 -53 -54 -55
  +56 -57 +58 -59 -6 -60 +61 -62 -63 -64 -7 -8 -9

There is only one package in the install plan, and we asked cabal-plan to also show the flag assignment, which it does. The output is almost the same as from z3!

If we decode this solution, we get the same answer:

┌─────┬─────┐
│ 2 1 │ 4 3 │
│ 4 3 │ 1 2 │
├─────┼─────┤
│ 1 2 │ 3 4 │
│ 3 4 │ 2 1 │
└─────┴─────┘

Conclusion

We successfully used the cabal-install dependency solver as a SAT solver. It is terribly slow, but it's probably still faster at solving a 2×2 sudoku puzzle than I am. The code is available on GitHub if you want to play with it.

However, it is not unheard of to need to encode some logical constraints in a cabal file.

For example, transformers-compat encodes which transformers version it depends on using a kind of unary encoding: each version bucket is encoded using a single flag. Removing some unrelated bits:

...

  if flag(four)
    build-depends: transformers >= 0.4.1 && < 0.5
  else
    build-depends: transformers < 0.4 || >= 0.5

  if flag(three)
    build-depends: transformers >= 0.3 && < 0.4
  else
    build-depends: transformers < 0.3 || >= 0.4

...

The choice of transformers versions forces assignments to the automatic flags (four, three, ...) and then we can alter build info of a package based on that.

That is an indirect way of writing (encoding!) something like

  if build-depends(transformers >= 0.4.1 && <0.5)
    ...

  if build-depends(transformers >= 0.3 && <0.4)
    ...

A common example in the past was adding old-locale dependency when the old version of time library was picked:

  if flag(old-locale)
    build-depends:
        old-locale  >=1.0.0.2 && <1.1
      , time        >=1.4     && <1.5

  else
    build-depends: time >=1.5 && <1.7

which could be written as

  build-depends: time >=1.4 && <1.7
  if build-depends(time < 1.5)
    build-depends: old-locale >=1.0.0.2 && <1.1

Another example is functor-classes-compat, which also encodes transformers and base version subsets, but using a binary encoding of four options. There the implied constraints are also (hopefully) disjoint, making the flag assignment deterministic.

I think that automatic flags are a good feature to have. They are a basic building block, but a "low-level" one. On the other hand, an if build-depends(...) construct would be more difficult to use wrongly, and would probably cover 99% of the use cases for automatic flags. If you are mindful that you are encoding an if build-depends(...) constraint, then you'll probably use cabal's automatic flags correctly. Conversely, if you are using automatic flags to encode something else, like solving general SAT problems, most likely you are doing something wrong.

August 30, 2023 12:00 AM

August 28, 2023

Michael Snoyman

Type Safety Doesn't Matter

I'm a huge believer in using strongly typed languages and leveraging type level protections in my codebases. But I'd like to clarify my new, somewhat modified stance on this:

Type safety does not matter.

What I mean is that, on its own, type safety is not important. It's only useful because of what it accomplishes: moving errors from runtime to compile time. Even that isn't a goal on its own. The real goal is reducing runtime errors. Type safety is one of the best methods of achieving these cascading goals, but it's far from the only one.

This may sound pedantic and click-baity, but in my opinion it's a vitally important distinction with real-world ramifications. For example, when discussing code architecture or reviewing a pull request, I will oftentimes push back on changes that add more complexity in the type system. The reason is that, even if a change adds "type safety," this extra complexity is only warranted if it achieves our primary goal, namely reducing runtime errors.

Such an assessment is largely speculative, subjective, and risk-based. By that last point, I'm tapping into my actuarial background. The idea is that, when considering a code change, the question will always be: do I think there's a high likelihood that this change will meaningfully reduce bug count in the long term more so than other activities I could be spending this time on? And if you watched my talk the economic argument for functional programming (or read the slides), you may be familiar with this way of thinking as the opportunity cost of spending more time on type safety.

This is why languages that provide for strong typing with type inference end up working out so well. There's relatively little cost for basic type safety mechanisms with significant gain. It's the 80/20 rule. I continue to believe that the vast majority of the value I've received from strongly typed languages like Rust, Haskell, and even TypeScript comes from the "simplest" features like enums/ADTs and pattern matching.
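To make that concrete, here is a tiny Haskell sketch (my own example, not from the post) of the kind of cheap win ADTs and exhaustive pattern matching buy you:

```haskell
{-# OPTIONS_GHC -Wall #-}
-- An ADT with a handful of states. (PaymentStatus and describe are
-- illustrative names I made up for this example.)
data PaymentStatus = Pending | Settled | Refunded

describe :: PaymentStatus -> String
describe Pending  = "awaiting settlement"
describe Settled  = "done"
describe Refunded = "returned to customer"
-- If a new constructor (say, Failed) is added later, every incomplete
-- match like this one is flagged at compile time by
-- -Wincomplete-patterns, instead of surfacing as a runtime bug.

main :: IO ()
main = putStrLn (describe Pending)
```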

Bug reduction is not the only benefit of strong typing. There's also: easier codebase maintainability, simplicity of refactoring, new engineering onboarding, potentially performance gains, and probably a few other things I missed. But for me, reduction in bugs is still the primary benefit.

This paradigm of assessing the value in bug reduction from type safety lets us broaden our scope a bit. If we want to reduce bugs in production, and we believe that moving bugs from runtime to compile time is a good way to do it, we can naturally find some related techniques. An obvious one is "static code analysis." But I'll simplify that with the 80/20 rule as well to linting tools. Using linting tools is a great way to get lots of benefits with little cost.

Just to prove this isn't only about types, let's take a concrete example from everyone's favorite language, JavaScript. If I'm writing a React application in JavaScript, I get virtually no type safety. (TypeScript is a different story, and it's the only thing that keeps me sane when working on frontend code.) Consider this bit of almost-correct React code:

const { userName, gameLevel } = props
const [userScore, setUserScore] = useState()

useEffect(() => {
    const helper = async () => {
        const res = await fetchUserScore(userName, gameLevel)
        const body = await res.json()
        setUserScore(body.score)
    }
    helper()
}, [userName, setUserScore])

For those not familiar: useEffect allows me to run some kind of an action, in this case an asynchronous data load from a server. This is a common pattern in React. As the user is using this application and changes the game level, I want to perform an action to load up their current score from the server and set it in a local store that can be used by the rest of the application. useEffect takes two arguments: the function to perform, and the list of dependencies to use. When one of those dependencies changes, the effect is rerun.

There are plenty of improvements to be made in this code, but there's one blatant bug: my useEffect dependency list does not include gameLevel. This would be a bug at runtime: once the user's score is loaded for a level, we would never reload it despite moving on to other levels. This would be the kind of bug that is easy to miss during manual testing, and could end up in production pretty easily.

Automated testing, unit tests, QA acceptance guidelines... basically everything around quality assurance will help ameliorate bugs like this. But static analysis arguably does even better here. The above code will immediately trigger lints saying "hey, I see you used gameLevel in your function, but you didn't list it in your dependencies." This is a prime example of moving a bug from runtime to compile time (or at least development time), preventing an entire class of bugs from occurring, and it didn't need any type safety to do it. Sure, it doesn't eliminate every potential bug, but it does knock down a whole bunch of them.

As you might imagine, this blog post was inspired by a specific set of problems I was running into at work. I thought about getting into those details here, and if there's interest I can write a follow-up blog post, but honestly the specific case isn't terribly interesting. My point here is the general principles:

  1. Understand why you're trying to use type safety. Is it preventing some kind of a bug from occurring? Is the time you're spending on implementing the type-safe solution paying off in bug reduction and other benefits?
  2. There are lots of other techniques worth considering for bug reduction. Static analysis is one I mentioned. Automated testing falls into this category as well. Don't be ideologically driven in which approaches you use. Choose the tool with the best power-to-weight ratio for what you're dealing with right now.

August 28, 2023 12:00 AM

August 25, 2023

GHC Developer Blog

GHC 9.4.7 is now available

GHC 9.4.7 is now available

Zubin Duggal - 2023-08-25

The GHC developers are happy to announce the availability of GHC 9.4.7. Binary distributions, source distributions, and documentation are available at downloads.haskell.org.

This release is primarily a bugfix release addressing some issues found in 9.4.6. These include:

  • A bump to bytestring-0.11.5.2 allowing GHC to be bootstrapped on systems where the bootstrap compiler is built with the pthread_condattr_setclock symbol available (#23789).
  • A number of bug fixes for scoping bugs in the specialiser, preventing simplifier panics (#21391, #21689, #21828, #23762).
  • Distributing dynamically linked alpine bindists (#23349, #23828).
  • A bug fix for the release notes syntax, allowing them to be built on systems with older python and sphinx versions (#23807, #23818).
  • … and a few more. See the release notes for a full accounting.

We would like to thank Microsoft Azure, GitHub, IOG, the Zw3rk stake pool, Well-Typed, Tweag I/O, Serokell, Equinix, SimSpace, Haskell Foundation, and other anonymous contributors whose on-going financial and in-kind support has facilitated GHC maintenance and release management over the years. Finally, this release would not have been possible without the hundreds of open-source contributors whose work comprise this release.

As always, do give this release a try and open a ticket if you see anything amiss.

Happy compiling,

  • Zubin

by ghc-devs at August 25, 2023 12:00 AM

August 23, 2023

Sandy Maguire

Certainty by Construction Progress Report 8

The following is a progress report for Certainty by Construction, a new book I’m writing on learning and effectively wielding Agda. Writing a book is a tedious and demoralizing process, so if this is the sort of thing you’re excited about, please do let me know!


Eight days away from my deadline. How’s it going? Hectic.

I’ve been in a flurry of editing for the last two weeks. As of right now, I’m currently editing page 138/252. At this rate, it’s not looking promising, but I did just buy a flat of Red Bull, so you never know.

Besides editing, what’s new? Lots of minor typesetting stuff, like which paragraphs should be indented. I also did a pass through all the Agda modules with their new, final names, in an easily searchable format. Along with that, the end of each chapter now has an explicit export list, which subsequent chapters import (rather than getting it from the stdlib). This means you can see at a glance whether a chapter has prerequisites you need to read first! Minor stuff, but Nintendo polish nevertheless.

I had to rewrite a good chunk of chapter 2, and a lot of the prose in chapter 3 is from a very early edition of the book and doesn’t have the same shine as the rest of it. So that’s getting reworked too. My hope is that the later chapters were written more recently and therefore will require less elbow grease. It’s plausible, and would be greatly appreciated. But I fear that the setoids chapter needs a lot of work, and I’m just trying my best to ignore it. For now.

In other news, I’m now uploading nightly builds to Leanpub in order to keep myself honest. There’s no indication of which half of the book has been edited and which hasn’t, but that seems like a good idea I should adopt for the next build. That way particularly dedicated readers could follow along and see just how quickly I can get material cleaned up. And it will prevent me from accidentally forgetting where I was and re-editing it all again. Which has happened several times, somehow.

Okay that’s enough of an update. Back to the grind. Love y’all.

August 23, 2023 12:00 AM

GHC Developer Blog

GHC 9.8.1-alpha3 is now available

GHC 9.8.1-alpha3 is now available

bgamari - 2023-08-23

The GHC developers are very pleased to announce the availability of the third alpha prerelease of GHC 9.8.1. Binary distributions, source distributions, and documentation are available at downloads.haskell.org.

GHC 9.8 will bring a number of new features and improvements, including:

  • Preliminary support for the TypeAbstractions language extension, allowing types to be bound in type declarations.

  • Support for the ExtendedLiterals extension, providing more consistent support for non-word-sized numeric literals in the surface language

  • Improved rewrite rule matching behavior, allowing limited matching of higher-order patterns

  • Better support for user-defined warnings by way of the WARNING pragma

  • The introduction of the new GHC.TypeError.Unsatisfiable constraint, allowing more predictable user-defined type errors

  • Implementation of the export deprecation proposal, allowing module exports to be marked with DEPRECATED pragmas

  • The addition of build semaphore support for parallel compilation, allowing better use of parallelism across GHC builds

  • More efficient representation of info table provenance information, reducing binary sizes by nearly 80% in some cases when -finfo-table-map is in use

A full accounting of changes can be found in the release notes. This alpha includes roughly a dozen changes relative to alpha 2, including what we believe should be nearly the last changes to GHC’s boot libraries.

We would like to thank GitHub, IOG, the Zw3rk stake pool, Well-Typed, Tweag I/O, Serokell, Equinix, SimSpace, the Haskell Foundation, and other anonymous contributors whose on-going financial and in-kind support has facilitated GHC maintenance and release management over the years. Finally, this release would not have been possible without the hundreds of open-source contributors whose work comprise this release.

As always, do give this release a try and open a ticket if you see anything amiss.

by ghc-devs at August 23, 2023 12:00 AM

August 22, 2023

Brent Yorgey

Swarm 0.4 release

The Swarm development team is very proud to announce the latest release of the game. This should still be considered a development/preview release—you still can’t save your games—but it’s made some remarkable progress and there are lots of fun things to try.

What is it?

As a reminder, Swarm is a 2D, open-world programming and resource gathering game with a strongly-typed, functional programming language and a unique upgrade system. Unlocking language features is tied to collecting resources, making it an interesting challenge to bootstrap your way into the use of the full language. It has also become a flexible and powerful platform for constructing programming challenges.

A few of the most significant new features are highlighted below; for full details, see the release notes. If you just want to try it out, see the installation instructions.

Expanded design possibilities

The default play mode is the open-world, resource-gathering scenario—but Swarm also supports “challenge scenarios”, where you have to complete one or more specific objectives with given resources on a custom map. There are currently 58 scenarios and counting—some are silly proofs of concept, but many are quite fun and challenging! I especially recommend checking out the Ranching and Sokoban scenarios, as well as A Frivolous Excursion (pictured below). And creating new scenarios is a great way you can contribute to Swarm even if you don’t know Haskell, or aren’t comfortable hacking on the codebase.

Recently, a large amount of work has gone into expanding the possibilities for scenario design:

  • Structure templates allow you to design map tiles and then reuse them multiple times within a scenario.
  • Waypoints and portals provide a mechanism for automatically navigating and teleporting around the world.
  • Scenarios can have multiple subworlds besides the main “overworld”, connected by portals. For example you could go “into a building” and have a separate map for the building interior.
  • There are a slew of new robot commands, many to do with different sensing modalities: stride, detect, sniff, chirp, resonate, watch, surveil, scout, instant, push, density, use, halt, and backup.
  • A new domain-specific language for describing procedurally generated worlds. The default procedurally generated world used to be hardcoded, but now it is described externally via the new DSL, and you can design your own procedurally generated worlds without editing the Swarm source code.
  • The key input handler feature allows you to program robots to respond to keyboard input, so you can e.g. drive them around manually, or interactively trigger more complex behaviors. This makes it possible to design “arcade-style” challenges, where the player needs to guide a robot and react to obstacles in real time—but they get to program the robot to respond to their commands first!
  • A new prototype integrated world editor lets you design worlds interactively.

UI improvements

In the past, entity and goal descriptions were simply plain text; recently, we switched to actually parsing Markdown. Partly, this is just to make things look nice, since we can highlight code snippets, entity names, etc.:

But it also means that we can now validate all code examples and entity names, and even test that the tutorial is pedagogically sound: any command used in a tutorial solution must be mentioned in a previous tutorial, or else our CI fails!

There are also a number of other small UI enhancements, such as improved type error messages, inventory search, and a collapsible REPL panel, among others.

Scoring metrics

We now keep track of a number of metrics related to challenge scenario solutions, such as total time, total game ticks, and code size. These metrics are tracked and saved across runs, so you can compete with yourself, and with others. For now, see these wiki pages:

Perhaps in the future there will be some kind of social website with leaderboards and user-uploaded scenarios.

Debugging

Last but not least, we now have an integrated single-stepping and debugging mode (enabled by the tweezers device).

Give it a try!

To install, check out the installation instructions: you can download a binary release (for now, Linux only, but MacOS binaries should be on the horizon), or install from Hackage. Give it a try and send us your feedback, either via a github issue or IRC!

Future plans & getting involved

We’re still hard at work on the game. Fun upcoming things include:

Of course, there are also tons of small things that need fixing and polishing too! If you’re interested in getting involved, check out our contribution guide, come join us on IRC (#swarm on Libera.Chat), or take a look at the list of issues marked “low-hanging fruit”.

Brought to you by the Swarm development team:

  • Brent Yorgey
  • Karl Ostmo
  • Ondřej Šebek

With contributions from:

  • Alexander Block
  • Brian Wignall
  • Chris Casinghino
  • Daniel Díaz Carrete
  • Huw Campbell
  • Ishan Bhanuka
  • Jacob
  • Jens Petersen
  • José Rafael Vieira
  • Joshua Price
  • lsmor
  • Noah Yorgey
  • Norbert Dzikowski
  • Paul Brauner
  • Ryan Yates
  • Sam Tay
  • Steven Garcia
  • Tamas Zsar
  • Tristan de Cacqueray
  • Valentin Golev

…not to mention many others who gave valuable suggestions and feedback. Want to see your name listed here in the next release? See how you can contribute!

by Brent at August 22, 2023 05:37 PM

August 21, 2023

Dan Piponi (sigfpe)

What does it mean for a monad to be strong?

This is something I put on github years ago but I probably should have put it here.


Here's an elementary example of the use of the list monad:


> test1 = do
>   x <- [1, 2]
>   y <- [x, 10*x]
>   [x*y]


We can desugar this to:


> test2 = [1, 2] >>= \x -> [x, 10*x] >>= \y -> [x*y]


It looks like we start with a list and then apply a sequence (of length 2) of functions to it using bind (>>=). This is probably why some people call monads workflows and why the comparison has been made with Unix pipes.


But looks can be deceptive. The operator (>>=) is right associative and test2 is the same as test3:


> test3 = [1, 2] >>= (\x -> [x, 10*x] >>= \y -> [x*y])


You can try to parenthesise the other way:


> -- test4 = ([1, 2] >>= \x -> [x, 10*x]) >>= \y -> [x*y]


We get a "Variable not in scope: x" error. So test1 doesn't directly fit the workflow model. When people give examples of how workflow style things can be seen as monads they sometimes use examples where later functions don't refer to variables defined earlier. For example at the link I gave above the line m >>= \x -> (n >>= \y -> o) is transformed to (m >>= \x -> n) >>= \y -> o, which only works if o makes no mention of x. I found similar things to be true in a number of tutorials, especially the ones that emphasise the Kleisli category view of things.


But we can always "reassociate" to the left with a little bit of extra work. The catch is that the function above defined by \y -> ... "captures" x from its environment. So it's not just one function, it's a family of functions parameterised by x. We can fix this by making the dependence on x explicit. We can then pull the inner function out as it's no longer implicitly dependent on its immediate context. When compilers do this it's called lambda lifting.


Define (the weirdly named function) strength by


> strength :: Monad m => (x, m y) -> m (x, y)
> strength (x, my) = do
>   y <- my
>   return (x, y)


It allows us to smuggle x "into the monad".


And now we can rewrite test1, parenthesising to the left:


> test5 = ([1, 2] >>= \x -> strength (x, [x, 10*x])) >>= \(x, y) -> [x*y]


This is much more like a workflow. Using strength we can rewrite any (monadic) do expression as a left-to-right workflow, with the cost of having to throw in some applications of strength to carry along all of the captured variables. It's also using a composition of arrows in the Kleisli category.


A monad with a strength function is called a strong monad. Clearly all Haskell monads are strong as I wrote strength to work with any Haskell monad. But not all monads in category theory are strong. It's a sort of hidden feature of Haskell (and the category Set) that we tend not to refer to explicitly. It could be said that we're implicitly using strength whenever we refer to earlier variables in our do expressions.
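In fact, in Haskell, strength needs only fmap, which is exactly why it comes for free for every Haskell monad (indeed, for every Functor):

```haskell
-- Equivalent to the do-notation strength above, written with fmap alone.
strength' :: Functor m => (x, m y) -> m (x, y)
strength' (x, my) = fmap (\y -> (x, y)) my

main :: IO ()
main = print (strength' (1 :: Int, [10, 20 :: Int]))
```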


See also nlab.


> main = do
>   print test1
>   print test2
>   print test3
>   -- print test4
>   print test5

by sigfpe (noreply@blogger.com) at August 21, 2023 03:23 PM

August 16, 2023

Philip Wadler

Orwell was right

 


A short comic by Mike Dawson. Stick to the end for a valid point.

by Philip Wadler (noreply@blogger.com) at August 16, 2023 12:32 PM

August 12, 2023

Sandy Maguire

Certainty by Construction Progress Report 7

The following is a progress report for Certainty by Construction, a new book I’m writing on learning and effectively wielding Agda. Writing a book is a tedious and demoralizing process, so if this is the sort of thing you’re excited about, please do let me know!


Where has this dingus Sandy been?? Busy busy busy! I’m in the middle of planning a wedding (my own), as well as just finished being the best man at my friend’s wedding. Plus getting the tax man’s records all sorted out for him, and a bunch of other things that fell into the “urgent” AND “important” categories.

Yeesh. Enough excuses though. I’m back and haven’t given up on any of this!

These days I’m calling the book “essentially done,” and all that is required is extensive editing. Which I’ve been doing. Every day on the bus I’m reading my PDF copy and making notes in the margin. Then I get home and go through the notes and clean up the prose.

It’s slow going, but that’s the way of the world. The prose is getting dramatically tightened up, however. It’s kind of fun to go through, be aware of the point I’m trying to make, and realize that I haven’t actually made it. I’m not calling this “rewriting,” but most paragraphs are changing dramatically.

Today I also sat down and hashed out a bunch of the technical pipeline issues I’ve been putting off for a year. Like getting section references working. So now instead of saying “as in sec:propeq?”, the prose now says “as in section 3.2”. The annotations have always been there, but getting the build to actually put in the text has taken away several hours of my life.

More excitingly, I also managed to get inline code snippets properly highlighted—and, even better, broken code now also highlights. This is a resounding achievement, because the whole idea of literate Agda is that it must compile. And the compiler is what generates the syntax highlighting. It’s a terrifying marvel of engineering, but it does work.

So that’s all. I’m just going to push on this book thing until it’s done. Or until September 1. Whichever comes sooner. That’s a terrifying thought, so I guess I’d better get back to it.

August 12, 2023 12:00 AM

August 10, 2023

Matt Parsons

The Meaning of Monad in MonadTrans

At work, someone noticed that they got a compiler warning for a derived instance of MonadTrans.

newtype FooT m a = FooT { unFooT :: StateT Int m a }
    deriving newtype
        (Functor, Applicative, Monad, MonadTrans)

GHC complained about a redundant Monad constraint. After passing -ddump-deriv, I saw that GHC was pasting in basically this instance:

instance MonadTrans FooT where
    lift :: Monad m => m a -> FooT m a
    lift = coerce (lift :: m a -> StateT Int m a)

The problem was that the Monad m constraint there is redundant - we’re not actually using it. However, it mirrors the definition of the class method.

In transformers < 0.6, the definition of MonadTrans class looked like this:

class MonadTrans t where
    lift :: Monad m => m a -> t m a

In transformers-0.6, a quantified superclass constraint was added to MonadTrans:

class (forall m. Monad m => Monad (t m)) => MonadTrans t where
    lift :: Monad m => m a -> t m a

I’m having a bit of semantic satiation with the word Monad, which isn’t an unfamiliar phenomenon for a Haskell developer. However, while explaining this behavior, I found there to be a very subtle distinction in what these constraints fundamentally mean.

What is a Constraint?

A Constraint is a thing in Haskell that GHC needs to solve in order to make your code work. Solving a constraint is similar to proving a proposition in constructive logic - GHC needs to find evidence that the claim holds, in the form of a type class instance.

When we write:

foo :: Num a => a -> a
foo x = x * 2

We’re saying:

I have a polymorphic function foo which can operate on types, if those types are instances of the class Num.

"If" is the big thing here - it’s a way of making a conditional claim. For a totally polymorphic function, like id :: a -> a, there are no conditions. You can call it with any type you want. But a conditionally polymorphic function expresses some requirements, or constraints, upon the input.

If you ask for constraints you don’t need, then you can get a warning by enabling -Wredundant-constraints.

woops :: (Bounded a, Num a) => a -> a
woops x = x + 5

GHC will happily warn us that we don’t actually use the Bounded a constraint, and it’s redundant. We should delete it. Indeed, there are many Num types that aren’t Bounded, and by requiring Bounded, we are reducing the potential types we could call this function on for no reason.

Constraints Liberate

(from Constraints Liberate, Liberties Constrain)

A constraint is a perspective on what is happening - it is the perspective of the caller of a function. It’s almost like I see a function type:

someCoolFunction :: _ => a -> b

And think - “Ah hah! I can call this at any type a and b that I want!” Only to find that there’s a bunch of constraints on a and b, and now I am constrained in the types I can call this function at.

However, a constraint feels very different from the implementer of a function. Let’s look at the classic identity function:

id :: a -> a
id a = a

As an implementer, I have quite a few constraints! Indeed, I can’t really do anything here. I can write equivalent functions, or much slower versions of this function, or I can escape hatch with unsafe behavior - but my options are really pretty limited.

id' :: a -> a
id' a = repeat a !! 1000

id'' :: a -> a
id'' a = let y = a in y

id''' :: a -> a
id''' a = iterate id' a !! 1000

However, a Constraint means that I now have some extra power.

foo :: Num a => a -> a

With this signature, I now have access to the Num type class methods, as well as any other function that is polymorphic over Num. The constraint is a liberty - I have gained the power to do stuff with the input.

The Two Monads

Back to MonadTrans -

class
    (forall m. Monad m => Monad (t m))
  =>
    MonadTrans t
  where
    lift :: Monad m => m a -> t m a

Method Constraint

Let’s talk about that lift constraint.

    lift :: Monad m => m a -> t m a

This constraint means that the input to lift must prove that it is a Monad. This means that, as implementers of lift, we can use the methods on Monad in order to make lift work out. We often don’t need it - consider these instances.

newtype IdentityT m a = IdentityT (m a)

instance MonadTrans IdentityT where
    lift action = IdentityT action

IdentityT can use the constructor directly, and does not require any methods at all to work with action.

newtype ReaderT r m a = ReaderT (r -> m a)

instance MonadTrans (ReaderT r) where
    lift action = ReaderT $ \_ -> action

ReaderT uses the constructor and throws away the r.

newtype ExceptT e m a = ExceptT (m (Either e a))

instance MonadTrans (ExceptT e) where
    lift action = ExceptT $ fmap Right action

Ah, now we’re using the Functor method fmap in order to make the inner action fit. We’re given an action :: m a, and we need an m (Either e a). And we’ve got Right :: a -> Either e a and fmap to make it work. We are allowed to call fmap here because Monad implies Applicative implies Functor, and we’ve been given the Monad m constraint as part of the method.
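For contrast, here is a sketch of a state-transformer instance, modelled on the transformers library's StateT but with my own names and a local copy of the pre-0.6 class so it stands alone. Unlike IdentityT and ReaderT, this instance genuinely uses the Monad m constraint, because it binds the action inside m:

```haskell
-- Local copy of the pre-0.6 MonadTrans class, so this sketch is
-- self-contained. (MonadTrans', lift', MyStateT are my names.)
class MonadTrans' t where
    lift' :: Monad m => m a -> t m a

newtype MyStateT s m a = MyStateT { runMyStateT :: s -> m (a, s) }

instance MonadTrans' (MyStateT s) where
    lift' action = MyStateT $ \s -> do
        a <- action        -- this bind is where Monad m is actually used
        pure (a, s)

main :: IO ()
main = print (runMyStateT (lift' (Just (5 :: Int))) 'x')
```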

Superclass Constraint

Let’s talk about the quantified superclass constraint (wow, what a fancy phrase).

    (forall m. Monad m => Monad (t m))

This superclass constraint means that the type t m a is a Monad if m is a Monad, and this is true for all m, not just a particular one. Prior to this, if you wanted to write a do block that was arbitrary in a monad transformer, you’d have to write:

ohno :: (Monad (t m), Monad m, MonadTrans t) => m a -> t m a
ohno action = do
    lift action
    lift action

What’s more annoying is that, if you had a few different underlying type parameters, you’d need to request Monad (t m) for each one - Monad (t m), Monad (t n), Monad (t f). Boring and redundant. Obviously, if m is a Monad and t is a monad transformer, then t m must also be a Monad - otherwise it’s not a valid monad transformer!
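With the quantified superclass in place, GHC derives Monad (t m) from Monad m by itself. Here is a minimal self-contained sketch; MonadTrans' and IdentityT' are local copies of the real definitions, so the example does not depend on which transformers version is installed:

```haskell
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Local copies of MonadTrans and IdentityT, for illustration only.
class (forall m. Monad m => Monad (t m)) => MonadTrans' t where
    lift' :: Monad m => m a -> t m a

newtype IdentityT' m a = IdentityT' { runIdentityT' :: m a }
    deriving (Functor, Applicative, Monad)

instance MonadTrans' IdentityT' where
    lift' = IdentityT'

-- No Monad (t m) in the context: the quantified superclass
-- provides it whenever Monad m holds.
better :: (MonadTrans' t, Monad m) => m a -> t m a
better action = do
    lift' action
    lift' action
```

Instantiated at t = IdentityT' and m = [], better runs the underlying list action twice.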

So this Constraint is slightly different from the ones we’ve seen so far. The first perspective on Constraint is that of the user of a function:

I am constrained in the types I can use a function with

The second perspective on Constraint is the implementer of a function:

By constraining my inputs, I gain knowledge and power over them

But this superclass constraint is a bit different. It doesn’t seem to be about requiring anything from our users. It also doesn’t seem to be about allowing more power for implementers.

Instead, it’s a form of evidence propagation. We’re saying:

GHC, if you know that m is a Monad, then you may also infer that t m is a Monad.

Type classes form a compositional tool for logic programming. Constraints like these are conditional propositions that allow GHC to see more options for solving and satisfying problems.
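Ordinary instance contexts are the everyday version of this. For a hypothetical Pair type, the instance below is read by GHC as: if you can prove Eq a, then you may infer Eq (Pair a) - the same conditional shape as the quantified superclass:

```haskell
-- A conditional proposition in instance form:
-- Eq a entails Eq (Pair a).
data Pair a = Pair a a

instance Eq a => Eq (Pair a) where
    Pair x y == Pair u v = x == u && y == v
```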

The Koan of Constraint

Four blind monks touch a Constraint, trying to identify what it is.

The first exclaims “This is a tool for limiting people.” The second laughs and says, “No, this is a tool for empowering people!” The third shakes his head solemnly and retorts, “No, this is a tool for clarifying wisdom.”

The fourth says “I cannot satisfy this.”

August 10, 2023 12:00 AM

August 07, 2023

Philip Wadler

The Problem with Counterfeit People

 


A sensible proposal in The Atlantic from philosopher Daniel Dennett. Is anyone campaigning for a law or regulation to this effect?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation.

There may be a way of at least postponing and possibly even extinguishing this ominous development, borrowing from the success—limited but impressive—in keeping counterfeit money merely in the nuisance category for most of us (or do you carefully examine every $20 bill you receive?).

As [historian Yuval Noah] Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning.

by Philip Wadler (noreply@blogger.com) at August 07, 2023 09:38 PM

Will A.I. become the new McKinsey?


Ted Chiang, ever thoughtful, suggests a new metaphor for A.I. Published in The New Yorker.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

by Philip Wadler (noreply@blogger.com) at August 07, 2023 09:09 PM

July 31, 2023

Philip Wadler

Our Labor Built AI

 



An introduction for laymen from The Nib. By Dan Nott and Scott Cambo.

by Philip Wadler (noreply@blogger.com) at July 31, 2023 12:53 PM

Chris Reade

Diagrams for Penrose Tiles

Penrose Kite and Dart Tilings with Haskell Diagrams

Revised version (no longer the full program in this literate Haskell)

Infinite non-periodic tessellations of Roger Penrose’s kite and dart tiles.

filledSun6

As part of a collaboration with Stephen Huggett, working on some mathematical properties of Penrose tilings, I recognised the need for quick renderings of tilings. I thought Haskell diagrams would be helpful here, and that turned out to be an excellent choice. Two dimensional vectors were well-suited to describing tiling operations and these are included as part of the diagrams package.

This literate Haskell uses the Haskell diagrams package to draw tilings with kites and darts. It also implements the main operations of compChoices and decompPatch which are essential for constructing tilings (explained below).

Firstly, these 5 lines are needed in Haskell to use the diagrams package:

{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE FlexibleContexts          #-}
{-# LANGUAGE TypeFamilies              #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine

and we will also import a module for half tiles (explained later)

import HalfTile

These are the kite and dart tiles.

Kite and Dart

The red line markings on the right-hand copies are purely to illustrate the rules for putting tiles together in legal (non-periodic) tilings. Obviously, edges can only be put together when they have the same length. If all the tiles are marked with red lines as illustrated on the right, then at any vertex where tiles meet, either every tile must have a red line at that vertex or none must. This prevents us from forming a simple rhombus by placing a kite top at the base of a dart, which would enable periodic tilings.

All edges are powers of the golden section \phi which we write as phi.

phi::Double
phi = (1.0 + sqrt 5.0) / 2.0

So if the shorter edges are unit length, then the longer edges have length phi. We also have the interesting property of the golden section that phi^2 = phi + 1 and so 1/phi = phi-1, phi^3 = 2phi +1 and 1/phi^2 = 2-phi.
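These identities are easy to sanity-check numerically, independently of the diagrams code (a quick standalone sketch, recomputing the golden section locally):

```haskell
-- Numeric sanity check of the golden-section identities:
-- phi^2 = phi+1, 1/phi = phi-1, phi^3 = 2phi+1, 1/phi^2 = 2-phi.
goldenChecks :: [Bool]
goldenChecks =
    [ approx (g*g)     (g + 1)
    , approx (1/g)     (g - 1)
    , approx (g*g*g)   (2*g + 1)
    , approx (1/(g*g)) (2 - g)
    ]
  where
    g = (1.0 + sqrt 5.0) / 2.0 :: Double
    approx a b = abs (a - b) < 1e-9
```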

All angles in the figures are multiples of tt which is 36 degrees (1/10 turn). We use ttangle to express such angles (e.g. 180 degrees is ttangle 5).

ttangle:: Int -> Angle Double
ttangle n = (fromIntegral (n `mod` 10))*^tt
             where tt = 1/10 @@ turn
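Angle and @@ turn come from the diagrams package; the underlying arithmetic is just tenth-of-a-turn multiples reduced mod 10, which can be sketched without diagrams as:

```haskell
-- ttangle's arithmetic without the diagrams Angle type:
-- n tenths of a turn, reduced mod 10.
ttangleTurns :: Int -> Double
ttangleTurns n = fromIntegral (n `mod` 10) * (1/10)
```

So ttangleTurns 5 is half a turn (180 degrees), and ttangleTurns 12 coincides with ttangleTurns 2.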

Pieces

In order to implement compChoices and decompPatch, we need to work with half tiles. We now define these in the separately imported module HalfTile with constructors for Left Dart, Right Dart, Left Kite, Right Kite

data HalfTile rep = LD rep -- defined in HalfTile module
                  | RD rep
                  | LK rep
                  | RK rep

where rep is a type variable allowing for different representations. However, here, we want to use a more specific type which we will call Piece:

type Piece = HalfTile (V2 Double)

where the half tiles have a simple 2D vector representation to provide orientation and scale. The vector represents the join edge of each half tile where halves come together. The origin for a dart is the tip, and the origin for a kite is the acute angle tip (marked in the figure with a red dot).

These are the only 4 pieces we use (oriented along the x axis)

ldart,rdart,lkite,rkite:: Piece
ldart = LD unitX
rdart = RD unitX
lkite = LK (phi*^unitX)
rkite = RK (phi*^unitX)
pieces

Perhaps confusingly, we regard left and right of a dart differently from left and right of a kite when viewed from the origin. The diagram shows the left dart before the right dart and the left kite before the right kite. Thus in a complete tile, going clockwise round the origin the right dart comes before the left dart, but the left kite comes before the right kite.

When it comes to drawing pieces, for the simplest case, we just want to show the two tile edges of each piece (and not the join edge). These edges are calculated as a list of 2 new vectors, using the join edge vector v. They are ordered clockwise from the origin of each piece

pieceEdges:: Piece -> [V2 Double]
pieceEdges (LD v) = [v',v ^-^ v'] where v' = phi*^rotate (ttangle 9) v
pieceEdges (RD v) = [v',v ^-^ v'] where v' = phi*^rotate (ttangle 1) v
pieceEdges (RK v) = [v',v ^-^ v'] where v' = rotate (ttangle 9) v
pieceEdges (LK v) = [v',v ^-^ v'] where v' = rotate (ttangle 1) v

Now drawing lines for the 2 outer edges of a piece is simply

drawPiece:: Piece -> Diagram B
drawPiece = strokeLine . fromOffsets . pieceEdges

It is also useful to calculate a list of the 4 tile edges of a completed half-tile piece clockwise from the origin of the tile. (This is useful for colour filling a tile)

wholeTileEdges:: Piece -> [V2 Double]
wholeTileEdges (LD v)
   = pieceEdges (RD v) ++ map negated (reverse (pieceEdges (LD v)))
wholeTileEdges (RD v)
   = wholeTileEdges (LD v)
wholeTileEdges (LK v)
   = pieceEdges (LK v) ++ map negated (reverse (pieceEdges (RK v)))
wholeTileEdges (RK v)
   = wholeTileEdges (LK v)

To fill whole tiles with colours (darts with dcol and kites with kcol) we can use leftFillDK. This uses only the left pieces to identify the whole tile and ignores right pieces, so that a tile is not filled twice.

leftFillDK:: Colour Double -> Colour Double -> Piece -> Diagram B
leftFillDK dcol kcol c =
  case c of (LD _) -> (strokeLoop $ glueLine $ fromOffsets $ wholeTileEdges c)
                       # fc dcol
            (LK _) -> (strokeLoop $ glueLine $ fromOffsets $ wholeTileEdges c)
                       # fc kcol
            _      -> mempty

To fill half tiles separately, we can use fillPiece which fills without drawing edges of a half tile.

fillPiece:: Colour Double -> Piece -> Diagram B
fillPiece col piece = drawJPiece piece # fc col # lw none

drawJPiece:: Piece -> Diagram B
drawJPiece = strokeLoop . closeLine . fromOffsets . pieceEdges

For an alternative fill operation we can use fillDK, which fills darts and kites with the given colours and draws the edges with drawPiece.

fillDK:: Colour Double -> Colour Double -> Piece -> Diagram B
fillDK dcol kcol piece = drawPiece piece <> fillPiece col piece where
    col = case piece of (LD _) -> dcol
                        (RD _) -> dcol
                        (LK _) -> kcol
                        (RK _) -> kcol

By making Pieces transformable we can reuse generic transform operations. These 4 lines of code are required to do this

type instance N (HalfTile a) = N a
type instance V (HalfTile a) = V a
instance Transformable a => Transformable (HalfTile a) where
    transform t ht = fmap (transform t) ht

So we can also scale a piece and rotate a piece by an angle. (Positive rotations are in the anticlockwise direction.)

scale :: Double -> Piece -> Piece
rotate :: Angle Double -> Piece -> Piece

Patches

A patch is a list of located pieces (each with a 2D point)

type Patch = [Located Piece]

To turn a whole patch into a diagram using some function pd for drawing the pieces, we use

drawPatchWith:: (Piece -> Diagram B) -> Patch -> Diagram B 
drawPatchWith pd patch = position $ fmap (viewLoc . mapLoc pd) patch

Here mapLoc applies a function to the piece in a located piece – producing a located diagram in this case, and viewLoc returns the pair of point and diagram from a located diagram. Finally position forms a single diagram from the list of pairs of points and diagrams.

The common special case drawPatch uses drawPiece on each piece

drawPatch :: Patch -> Diagram B
drawPatch = drawPatchWith drawPiece

Patches are automatically inferred to be transformable now that Pieces are transformable, so we can also scale a patch, translate a patch by a vector, and rotate a patch by an angle.

scale :: Double -> Patch -> Patch
rotate :: Angle Double -> Patch -> Patch
translate :: V2 Double -> Patch -> Patch

As an aid to creating patches with 5-fold rotational symmetry, we combine 5 copies of a basic patch (rotated by multiples of ttangle 2 successively).

penta:: Patch -> Patch
penta p = concatMap copy [0..4] 
            where copy n = rotate (ttangle (2*n)) p

This must be used with care to avoid nonsense patches. But two special cases are

sun,star::Patch         
sun =  penta [rkite `at` origin, lkite `at` origin]
star = penta [rdart `at` origin, ldart `at` origin]

This figure shows some example patches, drawn with drawPatch. The first is a star and the second is a sun.

tile patches

The tools so far for creating patches may seem limited (and do not help with ensuring legal tilings), but there is an even bigger problem.

Correct Tilings

Unfortunately, correct tilings – that is, tilings which can be extended to infinity – are not as simple as just legal tilings. It is not enough to have a legal tiling, because an apparent (legal) choice of placing one tile can have non-local consequences, causing a conflict with a choice made far away in a patch of tiles, resulting in a patch which cannot be extended. This suggests that constructing correct patches is far from trivial.

The infinite number of possible infinite tilings do have some remarkable properties. Any finite patch from one of them will occur in all the others (infinitely many times), and within a relatively small radius of any point in an infinite tiling. (For details, see the links at the end.)

This is why we need a different approach to constructing larger patches. There are two significant processes used for creating patches, namely compChoices and decompPatch.

To understand these processes, take a look at the following figure.

experiment

Here the small pieces have been drawn in an unusual way. The edges have been drawn with dashed lines, but long edges of kites have been emphasised with a solid line and the join edges of darts marked with a red line. From this you may be able to make out a patch of larger scale kites and darts. This is a composed patch arising from the smaller scale patch. Conversely, the larger kites and darts decompose to the smaller scale ones.

Decomposition

Since the rule for decomposition is uniquely determined, we can express it as a simple function on patches.

decompPatch :: Patch -> Patch
decompPatch = concatMap decompPiece

where the function decompPiece acts on located pieces and produces a list of the smaller located pieces contained in the piece. For example, a larger right dart will produce both a smaller right dart and a smaller left kite. Decomposing a located piece also takes care of the location, scale and rotation of the new pieces.

decompPiece lp = case viewLoc lp of
  (p, RD vd)-> [ LK vd  `at` p
               , RD vd' `at` (p .+^ v')
               ] where v'  = phi*^rotate (ttangle 1) vd
                       vd' = (2-phi) *^ (negated v') -- (2-phi) = 1/phi^2
  (p, LD vd)-> [ RK vd `at` p
               , LD vd' `at` (p .+^ v')
               ]  where v'  = phi*^rotate (ttangle 9) vd
                        vd' = (2-phi) *^ (negated v')  -- (2-phi) = 1/phi^2
  (p, RK vk)-> [ RD vd' `at` p
               , LK vk' `at` (p .+^ v')
               , RK vk' `at` (p .+^ v')
               ] where v'  = rotate (ttangle 9) vk
                       vd' = (2-phi) *^ v' -- v'/phi^2
                       vk' = ((phi-1) *^ vk) ^-^ v' -- (phi-1) = 1/phi
  (p, LK vk)-> [ LD vd' `at` p
               , RK vk' `at` (p .+^ v')
               , LK vk' `at` (p .+^ v')
               ] where v'  = rotate (ttangle 1) vk
                       vd' = (2-phi) *^ v' -- v'/phi^2
                       vk' = ((phi-1) *^ vk) ^-^ v' -- (phi-1) = 1/phi

This is illustrated in the following figure for the cases of a right dart and a right kite.

explanation

The symmetric diagrams for left pieces are easy to work out from these, so they are not illustrated.

With the decompPatch operation we can start with a simple correct patch, and decompose repeatedly to get more and more detailed patches. (Each decomposition scales the tiles down by a factor of 1/phi but we can rescale at any time.)

This figure illustrates how each piece decomposes with 4 decomposition steps below each one.

four decompositions of pieces
thePieces =  [ldart, rdart, lkite, rkite]  
fourDecomps = hsep 1 $ fmap decomps thePieces # lw thin where
        decomps pc = vsep 1 $ fmap drawPatch $ take 5 $ decompositionsP [pc `at` origin] 

We have made use of the fact that we can create an infinite list of finer and finer decompositions of any patch, using:

decompositionsP:: Patch -> [Patch]
decompositionsP = iterate decompPatch

We could get the n-fold decomposition of a patch as just the nth item in a list of decompositions.
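Since each decomposition step scales edge lengths by 1/phi, the same iterate pattern predicts the edge scale of the nth decomposition (a pure standalone sketch, separate from the Patch code):

```haskell
-- Pure sketch: the scale factor of the nth decomposition is (1/phi)^n,
-- obtained as the nth item of an iterated division by phi.
nthEdgeScale :: Int -> Double
nthEdgeScale n = iterate (/ g) 1.0 !! n
  where g = (1.0 + sqrt 5.0) / 2.0
```

By the identity 1/phi^2 = 2 - phi noted earlier, nthEdgeScale 2 equals 2 - phi.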

For example, here is an infinite list of decomposed versions of sun.

suns = decompositionsP sun

The coloured tiling shown at the beginning is simply 6 decompositions of sun, displayed using leftFillDK.

sun6 = suns!!6
filledSun6 = drawPatchWith (leftFillDK red blue) sun6 # lw ultraThin

The earlier figure illustrating larger kites and darts emphasised from the smaller ones is also sun6 but this time drawn with

experimentFig = drawPatchWith experiment sun6 # lw thin

where pieces are drawn with

experiment:: Piece -> Diagram B
experiment pc = emph pc <> (drawJPiece pc # dashingN [0.002,0.002] 0
                            # lw ultraThin)
  where emph pc = case pc of
   -- emphasise join edge of darts in red
          (LD v) -> (strokeLine . fromOffsets) [v] # lc red
          (RD v) -> (strokeLine . fromOffsets) [v] # lc red 
   -- emphasise long edges for kites
          (LK v) -> (strokeLine . fromOffsets) [rotate (ttangle 1) v]
          (RK v) -> (strokeLine . fromOffsets) [rotate (ttangle 9) v]

Compose Choices

You might expect composition to be a kind of inverse to decomposition, but it is a bit more complicated than that. With our current representation of pieces, we can only compose single pieces. This amounts to embedding the piece into a larger piece that matches how the larger piece decomposes. There is thus a choice at each composition step as to which of several possibilities we select as the larger half-tile. We represent this choice as a list of alternatives. This list should not be confused with a patch. It only makes sense to select one of the alternatives giving a new single piece.

The earlier diagram illustrating how decompositions are calculated also shows the two choices for embedding a right dart into either a right kite or a larger right dart. There will be two symmetric choices for a left dart, and three choices for left and right kites.

Once again we work with located pieces to ensure the resulting larger piece contains the original in its original position in a decomposition.

compChoices :: Located Piece -> [Located Piece]
compChoices lp = case viewLoc lp of
  (p, RD vd)-> [ RD vd' `at` (p .+^ v')
               , RK vk  `at` p
               ] where v'  = (phi+1) *^ vd       -- vd*phi^2
                       vd' = rotate (ttangle 9) (vd ^-^ v')
                       vk  = rotate (ttangle 1) v'
  (p, LD vd)-> [ LD vd' `at` (p .+^ v')
               , LK vk `at` p
               ] where v'  = (phi+1) *^ vd        -- vd*phi^2
                       vd' = rotate (ttangle 1) (vd ^-^ v')
                       vk  = rotate (ttangle 9) v'
  (p, RK vk)-> [ LD vk  `at` p
               , LK lvk' `at` (p .+^ lv') 
               , RK rvk' `at` (p .+^ rv')
               ] where lv'  = phi*^rotate (ttangle 9) vk
                       rv'  = phi*^rotate (ttangle 1) vk
                       rvk' = phi*^rotate (ttangle 7) vk
                       lvk' = phi*^rotate (ttangle 3) vk
  (p, LK vk)-> [ RD vk  `at` p
               , RK rvk' `at` (p .+^ rv')
               , LK lvk' `at` (p .+^ lv')
               ] where lv'  = phi*^rotate (ttangle 9) vk
                       rv'  = phi*^rotate (ttangle 1) vk
                       rvk' = phi*^rotate (ttangle 7) vk
                       lvk' = phi*^rotate (ttangle 3) vk

As the result is a list of alternatives, we need to select one to make further composition choices. We can express all the alternatives after n steps as compNChoices n where

compNChoices :: Int -> Located Piece -> [Located Piece]
compNChoices 0 lp = [lp]
compNChoices n lp = do
    lp' <- compChoices lp
    compNChoices (n-1) lp'
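The do block here runs in the list monad: each recursive step threads every alternative through the next. A tiny standalone analogue, with two artificial choices per step so that n steps produce 2^n results:

```haskell
-- Same recursion shape as compNChoices, but over Ints:
-- each step offers two successors in the list monad.
branches :: Int -> Int -> [Int]
branches 0 x = [x]
branches n x = do
    x' <- [2*x, 2*x + 1]
    branches (n - 1) x'
```

For example, branches 1 1 yields [2,3], and branches 3 1 yields all eight three-step descendants.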

This figure illustrates 5 consecutive choices for composing a left dart to produce a left kite. On the left, the finishing piece is shown with the starting piece embedded, and on the right the 5-fold decomposition of the result is shown.

five inflations
fiveCompChoices = hsep 1 $ fmap drawPatch [[ld,lk'], decompositionsP [lk'] !! 5] where 
-- two separate patches
       ld  = (ldart `at` origin)
       lk  = compChoices ld  !!1
       rk  = compChoices lk  !!1
       rk' = compChoices rk  !!2
       ld' = compChoices rk' !!0
       lk' = compChoices ld' !!1

Finally, at the end of this literate haskell program we choose which figure to draw as output.

fig::Diagram B
fig = filledSun6
main = mainWith fig

That’s it. But what about composing whole patches, I hear you ask? Unfortunately, we would need to answer questions such as which pieces are adjacent to a given piece in a patch, and whether there is a corresponding other half for a piece. These questions cannot be answered easily with our simple vector representations. We would need some form of planar graph representation, which is much more involved. That is another story.

Many thanks to Stephen Huggett for his inspirations concerning the tilings. A library version of the above code is available on GitHub

Further reading on Penrose Tilings

As well as the Wikipedia entry Penrose Tilings I recommend two articles in Scientific American from 2005 by David Austin Penrose Tiles Talk Across Miles and Penrose Tilings Tied up in Ribbons.

There is also a very interesting article by Roger Penrose himself: Penrose R Tilings and quasi-crystals; a non-local growth problem? in Aperiodicity and Order 2, edited by Jarich M, Academic Press, 1989.

More information about the diagrams package can be found from the home page Haskell diagrams

by readerunner at July 31, 2023 09:05 AM

Matt Parsons

Yamaha vs NS Design Electric Cellos

I’ve been learning cello again recently. The omnipresent advice for novices is to rent a cello from a reputable violin shop, which is great - my cello would be $2,700 new, but I’m renting it for $50/mo. And I can tell I’ll outgrow it, and I don’t yet know how to properly evaluate an acoustic cello for suitability.

I’ve got a practice mute, but it’s still rather loud - I can’t practice at night or early in the morning without waking someone up in my house. This led me to look at electric cellos for practice. There are essentially three options:

  • Shitty Cecilio electrics
  • Yamaha silent cellos
  • NS Design electric cellos

I had the opportunity to visit Luther Strings this past weekend to try the Yamaha and NS Design cellos.

Avoid Cecilio

First of all, the prices on the Yamaha and NS Design are high - you may be tempted to shop elsewhere for your first instrument. The Cecilio seems like a good deal at $530, but it isn’t. A friend of mine purchased one in college (budgets, hey). The electronics are awful - bad sound, noisy, connections are inconsistent and cut out. I lined it with tin foil and resoldered most of the joints, which helped. It wouldn’t hold tune, so I installed geared bass tuners. The bridge needs to be shorter and the fingerboard needs to be planed, which I can’t reasonably DIY and the luthiers want $100s to do it. The strings are uncomfortable, too, and they don’t sound good. It’ll cost about as much to get the instrument in usable condition as it does to buy it.

You could start on this, but you’d quickly be dismayed at the difficulty in pressing the strings down, the noise from the electronics, and the impossibility of drawing a good tone from it. You’re far better off saving more money to buy a nicer one, or seeing if anyone nearby can rent one of these - a string shop that carries electrics may be willing to do that.

Yamaha

Yamaha makes a range of silent practice cellos. They happen to be electric, but the clear intended purpose is for silent practice. They have body shapes that make it feel like a traditional cello - there’s a shoulder where you’d expect to go into thumb position, a frame for your legs, and a metal bit that rests on your chest. The electronics are all active, and have an A/C adapter - they have headphone outputs and reverb built-in. They also have a hookup for an auxiliary input, which would allow you to plug an MP3 player and listen to a backing track while you play.

The cheapest option is the Yamaha SVC-50 at $2,100.

NS Design

NS Design makes a range of electric cellos. They happen to be quiet, but the clear intended purpose is for playing electric cello. The shape is highly non-traditional - the neck is unobstructed throughout, and there’s nothing but a stick on the cello itself. You can play with an included adjustable tripod to find the right position, or you can purchase a frame/end-pin mount, or a strap system for standing upright. On the low end, the electronics are simple - a passive pickup, a switch for arco or pizzicato playing, a volume knob, and a tone knob - much like you’d find on an electric guitar or bass.

The cheapest option is the NS Design WAV4 at $1,259 (though Shar Music has one for $999). While this is significantly cheaper than the Yamaha, you’ll probably want the cello endpin stand at $385, and maybe the frame strap system at $259, or the thumbstop for $116. The cost is now $1,759, and you still can’t plug headphones directly into the instrument - for that, you need to go up to the CR series, which are around $4,000 (but have active electronics and a headphone jack).

Comparison

These are two pretty different items. However, they have a lot of overlap - and so deciding between them may be a challenge.

Cost

The NS Design WAV4 has a much lower entry price - at $999 from Shar, it’s a fantastic deal, and even the $1,259 from other retailers is good. However, the endpin stand is expensive and necessary to get the mobility and feel of a traditional cello, which would be an important purchase if the intent is to practice for traditional cello. The WAV4 also only has passive electronics and an instrument output - you need an amplifier of some sort to drive headphones.

The Yamaha is $2,100, but it is fully loaded. You buy it, and you’ll be able to practice cello silently with a backing track right now. You don’t need anything else.

Once you’ve bought the accessories you need to make these equivalent, the cost ends up around the same. This is defrayed a bit if you already have some of the amplifier/headphone gear for the NS cello from playing electric guitar.

Playing vs Traditional Cello

The Yamaha silent cello feels like a traditional cello. The chest stop feels right, and the two leg frames feel right. You can move the cello with your knees to get different string angles and to aid expressive playing. The shoulder of the cello indicates when you’d need to shift to thumb position on a traditional cello. Unfortunately, I found the fingerboard to be oddly shaped, and the strings were a bit high. It was far better than the Cecilio I’ve played, but I do still feel like a proper setup from a luthier would help dramatically. I’d say it feels like a $1,000 cello with $1,000 of electronics.

The NS Design doesn’t feel like a traditional cello at all. On the stock tripod, there’s no chest or leg support. The instrument sits in a fixed position. This doesn’t feel natural coming from an acoustic cello. I didn’t try the endpin stand, as the shop didn’t have it. The lack of shoulder means you have no indication when you’d need to shift into thumb position. There’s a single brass dot on the back of the neck that’s roughly where fourth position is.

The Yamaha clearly feels more like a traditional cello.

Playing as an Electric Cello

Yamaha’s intent is to allow you to pretend you are playing a real acoustic cello. Yamaha makes a comparable guitar - it’s their SLG200S silent acoustic guitar. The Yamaha silent cello doesn’t really feel like an instrument I’d want to play. I definitely wouldn’t want to write music for it, or perform with it. It’s for practice, and it lives in the shadow of the acoustic cello. You never want to play the Yamaha - you’d rather be playing your acoustic - but sometimes, hey, this is good enough, and better than nothing.

NS Design can sacrifice a lot of the engineering and design that the Yamaha needs for this in order to produce an instrument that stands alone. In the same way that a Fender Stratocaster or Gibson Les Paul are different instruments than an acoustic guitar, the NS Cello isn’t trying to be something that it isn’t. The tripod is sturdy and would work fine on stage. The frame strap allows you to stand and walk while playing - something the Yamaha simply cannot do, not even with The Block Strap, since the instrument is designed differently. The lack of a shoulder bout means you don’t have an indicator on when to shift - but that really just opens up the range of the upper end of the cello significantly. Just like how a classical guitar often has the shoulder bout on the 12th fret, but electric guitars can go up to 24 frets.

The NS Design cello simply feels much better to play. It’s easier to play double stops and sound notes. The neck has a nice feel to it. I can easily imagine playing this instrument for itself - writing music, performing with it, recording with it, etc.

Sound Quality

Electronics absolutely dominate the tone of an electric instrument.

The Yamaha cello is slightly noisy. You can hear a static hiss in the background. There’s no tone knob or onboard EQ, so the sound you get is what you get. In my opinion, it doesn’t sound great. However, it is a complete practice solution - and if you’re just practicing, the richness of tone doesn’t really matter. You can hear your technique just fine.

Now, I did not try the WAV4. I tried a 5 string CR, with active electronics - volume, switch, and an active EQ knob for bass and treble. The NS Design sounds great. The arco/pizzo switch yields some interesting tone combinations - while everything sounds great on the arco setting, the pizzicato setting brings out much more sustain, similar to a guitar. The electronics are quiet - no detectable noise. I found it easy to produce a tone that sounded good - you wouldn’t confuse it for an acoustic cello, but you wouldn’t think it sounded bad at all.

The WAV4 has the same pickup as the CR, but the electronics are passive. This means you only get a treble roll-off knob and a volume knob, plus the arco/pizzicato switch. I tend to prefer passive electronics anyway - both the Yamaha and NS Design cellos ran out of battery power during my trial, and produced some gnarly bad tone.

In my opinion, the NS Design wins here.

Overall fit/finish/feel

The NS Design CR cello feels fantastic. The craftsmanship is superb. Everything is sturdy and feels great. I think it looks nice, too - a quality wood finish and a pattern of dots on the fingerboard provide some visual interest for what is otherwise just a stick.

The Yamaha feels a bit like a toy. The aesthetics are pretty bare - it’s a black shape with some stuff sticking out. The tuners are kinda cheap looking. I didn’t get a great feeling for it.

Conclusion

Well, the Yamaha is clearly a better instrument for practicing traditional acoustic cello. No question about it. If all you care about is an all-in-one travel cello and the ability to practice without bothering anyone, then the Yamaha is the winner.

However, the NS Design has a lot of compelling points in its favor. It’s an instrument in-and-of-itself. It’s not trying to be a quieter acoustic cello, and this frees it to have many advantages over the Yamaha. The sound is better than the Yamaha’s. In many ways, playing the NS Cello is even easier than an acoustic cello, and it certainly feels much better than the Yamaha.

Overall, I feel compelled by the advantages of the NS cello. I’ll need to invest in the endpin stand, and I may try to fashion a detachable shoulder bout so I know when to practice thumb position. I’ve been intending to get a music studio going, which would solve the headphone amp problem, and I could also use any electric guitar amp.

I may regret the decision and decide that I don’t actually care about the electric cello features, and what I really did want was just a practice cello. But we’ll see!

July 31, 2023 12:00 AM

July 30, 2023

Chris Reade

Graphs, Kites and Darts


Figure 1: Three Coloured Patches

Non-periodic tilings with Penrose’s kites and darts

(An updated version, since original posting on Jan 6, 2022)

We continue our investigation of Penrose’s kite and dart tilings using Haskell with Haskell Diagrams. What is new is the introduction of a planar graph representation. This allows us to define more operations on finite tilings, in particular forcing and composing.

Previously in Diagrams for Penrose Tiles we implemented tools to create and draw finite patches of Penrose kites and darts (such as the samples depicted in figure 1). The code for this and for the new graph representation and tools described here can be found on GitHub https://github.com/chrisreade/PenroseKiteDart.

To describe the tiling operations it is convenient to work with the half-tiles: LD (left dart), RD (right dart), LK (left kite), RK (right kite) using a polymorphic type HalfTile (defined in a module HalfTile)

data HalfTile rep
  = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)

Here rep is a type variable for a representation to be chosen. For drawing purposes, we chose two-dimensional vectors (V2 Double) and called these Pieces.

type Piece = HalfTile (V2 Double)

The vector represents the join edge of the half tile (see figure 2) and thus the scale and orientation are determined (the other tile edges are derived from this when producing a diagram).

Figure 2: The (half-tile) pieces showing join edges (dashed) and origin vertices (red dots)

Finite tilings or patches are then lists of located pieces.

type Patch = [Located Piece]

Both Piece and Patch are made transformable so rotate, and scale can be applied to both and translate can be applied to a Patch. (Translate has no effect on a Piece unless it is located.)

In Diagrams for Penrose Tiles we also discussed the rules for legal tilings and specifically the problem of incorrect tilings which are legal but get stuck so cannot continue to infinity. In order to create correct tilings we implemented the decompose operation on patches.

The vector representation that we use for drawing is not well suited to exploring properties of a patch such as neighbours of pieces. Knowing about neighbouring tiles is important for being able to reason about composition of patches (inverting a decomposition) and to find which pieces are determined (forced) on the boundary of a patch.

However, the polymorphic type HalfTile allows us to introduce our alternative graph representation alongside Pieces.

Tile Graphs

In the module Tgraph.Prelude, we have the new representation which treats half tiles as triangular faces of a planar graph – a TileFace – by specialising HalfTile with a triple of vertices (clockwise starting with the tile origin)

type Vertex = Int
type TileFace = HalfTile (Vertex,Vertex,Vertex)

For example

LD (1,3,4)       RK (6,4,3)

When we need to refer to particular vertices from a TileFace we use originV (the first vertex – red dot in figure 2), oppV (the vertex at the opposite end of the join edge – dashed edge in figure 2), wingV (the remaining vertex not on the join edge).

originV, oppV, wingV :: TileFace -> Vertex
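These selectors can be sketched directly from the clockwise-triple convention. The following is a self-contained sketch (repeating the type definitions), consistent with the fool example below but not necessarily identical to the library source:

```haskell
-- Sketch of the vertex selectors, assuming the clockwise-from-origin
-- triple convention described above. Not the library's actual source.
data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)

type Vertex = Int
type TileFace = HalfTile (Vertex, Vertex, Vertex)

-- The origin is always the first vertex of the triple.
originV :: TileFace -> Vertex
originV (LD (a,_,_)) = a
originV (RD (a,_,_)) = a
originV (LK (a,_,_)) = a
originV (RK (a,_,_)) = a

-- The other end of the join edge: the second vertex for LD and RK,
-- the third for RD and LK (the join is (a,b) or (c,a) respectively).
oppV :: TileFace -> Vertex
oppV (LD (_,b,_)) = b
oppV (RD (_,_,c)) = c
oppV (LK (_,_,c)) = c
oppV (RK (_,b,_)) = b

-- The remaining vertex, not on the join edge.
wingV :: TileFace -> Vertex
wingV (LD (_,_,c)) = c
wingV (RD (_,b,_)) = b
wingV (LK (_,b,_)) = b
wingV (RK (_,_,c)) = c
```

So for LD (1,3,4) the origin is 1, the opposite end of the join edge is 3, and the wing is 4.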

Tgraphs

The Tile Graphs implementation uses a type Tgraph which has a list of tile faces and a maximum vertex number.

data Tgraph = Tgraph { maxV  :: Vertex
                     , faces :: [TileFace]
                     }  deriving (Show)

For example, fool (short for a fool’s kite) is a Tgraph with 6 faces and 7 vertices, shown in figure 3.

fool = Tgraph { maxV = 7
              , faces = [RD (1,2,3),LD (1,3,4),RK (6,2,5)
                        ,LK (6,3,2),RK (6,4,3),LK (6,7,4)
                        ]
              }

(The fool is also called an ace in the literature.)

Figure 3: fool

With this representation we can investigate how composition works with whole patches. Figure 4 shows a twice decomposed sun on the left and a once decomposed sun on the right (both with vertex labels). In addition to decomposing the right Tgraph to form the left Tgraph, we can also compose the left Tgraph to get the right Tgraph.

Figure 4: sunD2 and sunD

After implementing composition, we also explore a force operation and an emplace operation to extend tilings.

There are some constraints we impose on Tgraphs.

  • No spurious vertices. The vertices of a Tgraph are the vertices that occur in the faces of the Tgraph (and maxV is the largest number occurring).

  • Connected. The collection of faces must be a single connected component.

  • No crossing boundaries. By this we mean that vertices on the boundary are incident with exactly two boundary edges. The boundary consists of the edges between the Tgraph faces and exterior region(s). This is important for adding faces.

  • Tile connected. Roughly, this means that if we collect the faces of a Tgraph by starting from any single face and then add faces which share an edge with those already collected, we get all the Tgraph faces. This is important for drawing purposes.

In fact, if a Tgraph is connected with no crossing boundaries, then it must be tile connected. (We could define tile connected to mean that the dual graph excluding exterior regions is connected.)
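The face-collection process just described can be written down almost verbatim. A self-contained sketch of the check (with its own copies of the types; the real library tracks more than this):

```haskell
import Data.List (intersect, (\\))

data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)
type Vertex = Int
type TileFace = HalfTile (Vertex, Vertex, Vertex)

tileRep :: HalfTile rep -> rep
tileRep (LD r) = r
tileRep (RD r) = r
tileRep (LK r) = r
tileRep (RK r) = r

-- Undirected edges of a face, smaller vertex first.
faceEdges :: TileFace -> [(Vertex, Vertex)]
faceEdges f = [ (min x y, max x y) | (x,y) <- [(a,b),(b,c),(c,a)] ]
  where (a,b,c) = tileRep f

-- Two faces are tile-adjacent when they share an edge.
adjacent :: TileFace -> TileFace -> Bool
adjacent f g = not (null (faceEdges f `intersect` faceEdges g))

-- Grow a collection from the first face by repeatedly adding faces
-- sharing an edge with those already collected.
tileConnected :: [TileFace] -> Bool
tileConnected []     = True
tileConnected (f:fs) = go [f] fs
  where
    go _    []   = True
    go done rest =
      case [ g | g <- rest, any (adjacent g) done ] of
        []  -> False                      -- remaining faces unreachable
        new -> go (new ++ done) (rest \\ new)
```

Applied to the six faces of fool (shown above) this returns True; a face list with no shared edges at all, such as just RD (1,2,3) and LK (6,7,4), fails the check.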

Figure 5 shows two excluded graphs which have crossing boundaries at 4 (left graph) and 13 (right graph). The left graph is still tile connected but the right is not tile connected (the two faces at the top right do not have an edge in common with the rest of the faces).

Although we have allowed for Tgraphs with holes (multiple exterior regions), we note that such holes cannot be created by adding faces one at a time without creating a crossing boundary. They can be created by removing faces from a Tgraph without necessarily creating a crossing boundary.

Important We are using face as an abbreviation for half-tile face of a Tgraph here, and we do not count the exterior of a patch of faces to be a face. The exterior can also be disconnected when we have holes in a patch of faces and the holes are not counted as faces either. In graph theory, the term face would generally include these other regions, but we will call them exterior regions rather than faces.

Figure 5: A tile-connected graph with crossing boundaries at 4, and a non tile-connected graph

In addition to the constructor Tgraph we also use

checkedTgraph :: [TileFace] -> Tgraph

which creates a Tgraph from a list of faces, but also performs checks on the required properties of Tgraphs. We can then remove or select faces from a Tgraph and then use checkedTgraph to ensure the resulting Tgraph still satisfies the required properties.

selectFaces, removeFaces  :: [TileFace] -> Tgraph -> Tgraph
selectFaces fcs g = checkedTgraph (faces g `intersect` fcs)
removeFaces fcs g = checkedTgraph (faces g \\ fcs)

Edges and Directed Edges

We do not explicitly record edges as part of a Tgraph, but calculate them as needed. Implicitly we are requiring

  • No spurious edges. The edges of a Tgraph are the edges of the faces of the Tgraph.

To represent edges, a pair of vertices (a,b) is regarded as a directed edge from a to b. A list of such pairs will usually be regarded as a directed edge list. In the special case that the list is symmetrically closed [(b,a) is in the list whenever (a,b) is in the list] we will refer to this as an edge list rather than a directed edge list.

The following functions on TileFaces all produce directed edges (going clockwise round a face).

type Dedge = (Vertex,Vertex)

joinE  :: TileFace -> (Vertex,Vertex)
  -- join edge - dashed in figure 2
shortE :: TileFace -> (Vertex,Vertex)
  -- the short edge which is not a join edge
longE  :: TileFace -> (Vertex,Vertex)
  -- the long edge which is not a join edge
faceDedges :: TileFace -> [(Vertex,Vertex)]
  -- all three directed edges clockwise from origin

For the whole Tgraph, we often want a list of all the directed edges of all the faces.

graphDedges :: Tgraph -> [(Vertex,Vertex)]
graphDedges g = concatMap faceDedges (faces g)

Because our graphs represent tilings they are planar (can be embedded in a plane) so we know that at most two faces can share an edge and they will have opposite directions of the edge. No two faces can have the same directed edge. So from graphDedges g we can easily calculate internal edges (edges shared by 2 faces) and boundary directed edges (directed edges round the external regions).

internalEdges, boundaryDedges :: Tgraph -> [(Vertex,Vertex)]

The internal edges of g are those edges which occur in both directions in graphDedges g. The boundary directed edges of g are the missing reverse directions in graphDedges g.
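Both calculations are short list comprehensions over the directed edge list. A self-contained sketch (operating on a face list rather than a Tgraph, returning each internal edge once, and returning the boundary as the missing reverse directions, as described above):

```haskell
data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)
type Vertex = Int
type TileFace = HalfTile (Vertex, Vertex, Vertex)
type Dedge = (Vertex, Vertex)

tileRep :: HalfTile rep -> rep
tileRep (LD r) = r
tileRep (RD r) = r
tileRep (LK r) = r
tileRep (RK r) = r

-- All three directed edges, clockwise from the origin vertex.
faceDedges :: TileFace -> [Dedge]
faceDedges f = [(a,b), (b,c), (c,a)] where (a,b,c) = tileRep f

graphDedges :: [TileFace] -> [Dedge]
graphDedges = concatMap faceDedges

-- Edges occurring in both directions (kept once, smaller vertex first).
internalEdges :: [TileFace] -> [Dedge]
internalEdges fcs = [ (a,b) | (a,b) <- des, (b,a) `elem` des, a < b ]
  where des = graphDedges fcs

-- The missing reverse directions: directed edges round the exterior.
boundaryDedges :: [TileFace] -> [Dedge]
boundaryDedges fcs = [ (b,a) | (a,b) <- des, (b,a) `notElem` des ]
  where des = graphDedges fcs

-- The faces of fool from the example above.
foolFaces :: [TileFace]
foolFaces = [ RD (1,2,3), LD (1,3,4), RK (6,2,5)
            , LK (6,3,2), RK (6,4,3), LK (6,7,4) ]
```

For fool this gives 6 internal edges and 6 boundary directed edges (12 + 6 = 18 directed edges in total, three per face).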

We also refer to all the long edges of a Tgraph (including kite join edges) as phiEdges (both directions of these edges).

phiEdges :: Tgraph -> [(Vertex, Vertex)]

This is so named because, when drawn, these long edges are phi times the length of the short edges (phi being the golden ratio which is approximately 1.618).
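The identity behind this is phi^2 = phi + 1 (equivalently phi = 1 + 1/phi), which is why, after rescaling, a phi-length edge can split into a unit edge plus a 1/phi edge under decomposition. As a quick numeric check:

```haskell
phi :: Double
phi = (1 + sqrt 5) / 2   -- the golden ratio, approximately 1.618

-- phi is the positive root of x^2 = x + 1,
-- so also phi = 1 + 1/phi: a long edge is a short edge plus a smaller long edge.
phiSquaredIdentity, phiReciprocalIdentity :: Bool
phiSquaredIdentity    = abs (phi * phi - (phi + 1)) < 1e-12
phiReciprocalIdentity = abs (phi - (1 + 1/phi))     < 1e-12
```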

Drawing Tgraphs (Patches and VPatches)

The module Tgraph.Convert contains functions to convert a Tgraph to our previous vector representation (Patch) defined in TileLib so we can use the existing tools to produce diagrams.

However, it is convenient to have an intermediate stage (a VPatch = Vertex Patch) which contains both faces and calculated vertex locations (a finite map from vertices to locations). This allows vertex labels to be drawn and for faces to be identified and retained/excluded after the location information is calculated.

data VPatch = VPatch { vLocs :: VertexLocMap
                     , vpFaces::[TileFace]
                     } deriving Show

The conversion functions include

makeVP   :: Tgraph -> VPatch
dropLabels :: VPatch -> Patch -- discards vertex information

For drawing purposes we introduced a class Drawable which has a means to create a diagram when given a function to draw Pieces.

class Drawable a where
  drawWith :: (Piece -> Diagram B) -> a -> Diagram B

This allows us to make Patch, VPatch and Tgraph instances of Drawable, and we can define special cases for the most frequently used drawing tools.

draw :: Drawable a => a -> Diagram B
draw = drawWith drawPiece

drawj :: Drawable a => a -> Diagram B
drawj = drawWith dashjPiece

We also need to be able to create diagrams with vertex labels, so we also use

class DrawableLabelled a where
  drawLabelledWith :: (Piece -> Diagram B) -> a -> Diagram B

Both VPatch and Tgraph are made instances (but not Patch as this no longer has vertex information). The most common drawing cases with labelling are then

drawLabelled :: DrawableLabelled a => a -> Diagram B
drawLabelled = drawLabelledWith drawPiece

drawjLabelled :: DrawableLabelled a => a -> Diagram B
drawjLabelled = drawLabelledWith dashjPiece

One consequence of using abstract graphs is that there is no unique predefined way to orient or scale or position the VPatch (and Patch) arising from a Tgraph representation. Our implementation selects a particular join edge and aligns it along the x-axis (unit length for a dart join, phi length for a kite join) and tile-connectedness ensures the rest of the VPatch (and Patch) can be calculated from this.

We also have functions to re-orient a VPatch and lists of VPatches using chosen pairs of vertices. [Simply doing rotations on the final diagrams can cause problems if these include vertex labels. We do not, in general, want to rotate the labels – so we need to orient the VPatch before converting to a diagram.]

Decomposing Graphs

We previously implemented decomposition for patches which splits each half-tile into two or three smaller scale half-tiles.

decompPatch :: Patch -> Patch

We now have a Tgraph version of decomposition in the module Tgraphs:

decompose :: Tgraph -> Tgraph

Graph decomposition is particularly simple. We start by introducing one new vertex for each long edge (the phiEdges) of the Tgraph. We then build the new faces from each old face using the new vertices.

As a running example we take fool (mentioned above) and its decomposition foolD

*Main> foolD = decompose fool

*Main> foolD
Tgraph { maxV = 14
       , faces = [LK (1,8,3),RD (2,3,8),RK (1,3,9)
                 ,LD (4,9,3),RK (5,13,2),LK (5,10,13)
                 ,RD (6,13,10),LK (3,2,13),RK (3,13,11)
                 ,LD (6,11,13),RK (3,14,4),LK (3,11,14)
                 ,RD (6,14,11),LK (7,4,14),RK (7,14,12)
                 ,LD (6,12,14)
                 ]
       }

which are best seen together (fool followed by foolD) in figure 6.

Figure 6: fool and foolD (= decompose fool)
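The "one new vertex per long (phi) edge" step can be sanity-checked on this example. A self-contained sketch, where phiEdgesOf is a hypothetical helper following the triple conventions above (not the library API):

```haskell
import Data.List (nub)

data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)
type Vertex = Int
type TileFace = HalfTile (Vertex, Vertex, Vertex)

-- The long edge of each half-tile, plus the join edge of kite halves
-- (kite joins are also phi edges). A sketch, not library code.
phiEdgesOf :: TileFace -> [(Vertex, Vertex)]
phiEdgesOf (LD (a,_,c)) = [(c,a)]            -- dart long edge only
phiEdgesOf (RD (a,b,_)) = [(a,b)]
phiEdgesOf (LK (a,b,c)) = [(a,b), (c,a)]     -- kite long edge and join
phiEdgesOf (RK (a,b,c)) = [(c,a), (a,b)]

-- Count undirected phi edges (each edge once, smaller vertex first).
countPhiEdges :: [TileFace] -> Int
countPhiEdges fcs =
  length (nub [ (min a b, max a b) | f <- fcs, (a,b) <- phiEdgesOf f ])

foolFaces :: [TileFace]
foolFaces = [ RD (1,2,3), LD (1,3,4), RK (6,2,5)
            , LK (6,3,2), RK (6,4,3), LK (6,7,4) ]
```

countPhiEdges foolFaces is 7 (five distinct long edges plus two kite joins), which matches the vertex counts above: fool has maxV 7 and foolD has maxV 14 = 7 + 7.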

Composing Tgraphs, and Unknowns

Composing is meant to be an inverse to decomposing, and one of the main reasons for introducing our graph representation. In the literature, decomposition and composition are defined for infinite tilings and in that context they are unique inverses to each other. For finite patches, however, we will see that composition is not always uniquely determined.

In figure 7 (Two Levels) we have emphasised the larger scale faces on top of the smaller scale faces.

Figure 7: Two Levels

How do we identify the composed tiles? We start by classifying vertices which are at the wing tips of the (smaller) darts as these determine how things compose. In the interior of a graph/patch (e.g in figure 7), a dart wing tip always coincides with a second dart wing tip, and either

  1. the 2 dart halves share a long edge. The shared wing tip is then classified as a largeKiteCentre and is at the centre of a larger kite. (See left vertex type in figure 8), or
  2. the 2 dart halves touch at their wing tips without sharing an edge. This shared wing tip is classified as a largeDartBase and is the base of a larger dart. (See right vertex type in figure 8)
Figure 8: largeKiteCentre (left) and largeDartBase (right)

[We also call these (respectively) a deuce vertex type and a jack vertex type later in figure 10]

Around the boundary of a Tgraph, the dart wing tips may not share with a second dart. Sometimes the wing tip has to be classified as unknown but often it can be decided by looking at neighbouring tiles. In this example of a four times decomposed sun (sunD4), it is possible to classify all the dart wing tips as a largeKiteCentre or a largeDartBase so there are no unknowns.

If there are no unknowns, then we have a function to produce the unique composed Tgraph.

compose :: Tgraph -> Tgraph

Any correct decomposed Tgraph without unknowns will necessarily compose back to its original. This makes compose a left inverse to decompose provided there are no unknowns.

For example, with an (n times) decomposed sun we will have no unknowns, so these will all compose back up to a sun after n applications of compose. For n=4 (sunD4 – the smaller scale shown in figure 7) the dart wing classification returns 70 largeKiteCentres, 45 largeDartBases, and no unknowns.

Similarly with the simpler foolD example, if we classify the dart wings we get

largeKiteCentres = [14,13]
largeDartBases = [3]
unknowns = []

In foolD (the right hand Tgraph in figure 6), nodes 14 and 13 are new kite centres and node 3 is a new dart base. There are no unknowns, so we can use compose safely.

*Main> compose foolD
Tgraph { maxV = 7
       , faces = [RD (1,2,3),LD (1,3,4),RK (6,2,5)
                 ,RK (6,4,3),LK (6,3,2),LK (6,7,4)
                 ]
       }

which reproduces the original fool (left hand Tgraph in figure 6).

However, if we now check out unknowns for fool we get

largeKiteCentres = []
largeDartBases = []
unknowns = [4,2]    

So both nodes 2 and 4 are unknowns. It had looked as though fool would simply compose into two half kites back-to-back (sharing their long edge not their join), but the unknowns show there are other possible choices. Each unknown could become a largeKiteCentre or a largeDartBase.

The question is then what to do with unknowns.

Partial Compositions

In fact our compose resolves two problems when dealing with finite patches. One is the unknowns and the other is critical missing faces needed to make up a new face (e.g. the absence of any half dart).

It is implemented using an intermediary function for partial composition

partCompose :: Tgraph -> ([TileFace],Tgraph)

partCompose will compose everything that is uniquely determined, but will leave out faces round the boundary which cannot be determined or cannot be included in a new face. It returns the faces of the argument Tgraph that were not used, along with the composed Tgraph.

Figure 9 shows the result of partCompose applied to two graphs. [These are force kiteD3 and force dartD3 on the left. Force is described later]. In each case, the excluded faces of the starting Tgraph are shown in pale green, overlaid by the composed Tgraph on the right.

Figure 9: partCompose for two graphs (force kiteD3 top row and force dartD3 bottom row)

Then compose is simply defined to keep the composed faces and ignore the unused faces produced by partCompose.

compose :: Tgraph -> Tgraph
compose = snd . partCompose

This approach avoids making a decision about unknowns when composing, but it may lose some information by throwing away the uncomposed faces.

For correct Tgraphs g, if decompose g has no unknowns, then compose is a left inverse to decompose. However, if we take g to be two kite halves sharing their long edge (not their join edge), then these decompose to fool, which produces an empty Tgraph when recomposed. Thus we do not have g = compose (decompose g) in general. On the other hand, we do have g = compose (decompose g) for correct whole-tile Tgraphs g (whole-tile means every half-tile of g has its matching half-tile on its join edge in g).

Later (figure 21) we show another exception to g = compose (decompose g) with an incorrect tiling.

We make use of

selectFacesVP    :: [TileFace] -> VPatch -> VPatch
removeFacesVP    :: [TileFace] -> VPatch -> VPatch
selectFacesGtoVP :: [TileFace] -> Tgraph -> VPatch
removeFacesGtoVP :: [TileFace] -> Tgraph -> VPatch

for creating VPatches from selected tile faces of a Tgraph or VPatch. This allows us to represent and draw a subgraph which need not be connected nor satisfy the no crossing boundaries property provided the Tgraph it was derived from had these properties.

Forcing

When building up a tiling, following the rules, there is often no choice about what tile can be added alongside certain tile edges at the boundary. Such additions are forced by the existing patch of tiles and the rules. For example, if a half tile has its join edge on the boundary, the unique mirror half tile is the only possibility for adding a face to that edge. Similarly, the short edge of a left (respectively, right) dart can only be matched with the short edge of a right (respectively, left) kite. We also make use of the fact that only 7 types of vertex can appear in (the interior of) a patch, so on a boundary vertex we sometimes have enough of the faces to determine the vertex type. These are given the following names in the literature (shown in figure 10): sun, star, jack (=largeDartBase), queen, king, ace, deuce (=largeKiteCentre).

Figure 10: Vertex types

The function

force :: Tgraph -> Tgraph

will add some faces on the boundary that are forced (i.e. new faces where there is exactly one possible choice). For example:

  • When a join edge is on the boundary – add the missing half tile to make a whole tile.
  • When a half dart has its short edge on the boundary – add the half kite that must be on the short edge.
  • When a vertex is both a dart origin and a kite wing (it must be a queen or king vertex) – if there is a boundary short edge of a kite half at the vertex, add another kite half sharing the short edge, (this converts 1 kite to 2 and 3 kites to 4 in combination with the first rule).
  • When two half kites share a short edge their common oppV vertex must be a deuce vertex – add any missing half darts needed to complete the vertex.
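The first rule, completing a half-tile whose join edge is on the boundary, is simple enough to sketch: the mirror half shares both join-edge vertices, so only its wing vertex is new. Here completeHalf is a hypothetical helper following the triple convention used for fool, not the library's actual update machinery:

```haskell
data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)
type Vertex = Int
type TileFace = HalfTile (Vertex, Vertex, Vertex)

-- Given a fresh vertex w for the new wing, build the mirror half-tile
-- sharing the join edge of the given face.
completeHalf :: Vertex -> TileFace -> TileFace
completeHalf w (LD (a,b,_)) = RD (a,w,b)   -- join (a,b) shared
completeHalf w (RD (a,_,c)) = LD (a,c,w)   -- join (c,a) shared
completeHalf w (LK (a,_,c)) = RK (a,c,w)   -- join (c,a) shared
completeHalf w (RK (a,b,_)) = LK (a,w,b)   -- join (a,b) shared
```

In fool, RD (1,2,3) and LD (1,3,4) are such a mirror pair: completeHalf 2 (LD (1,3,4)) rebuilds RD (1,2,3), and likewise for the kite halves.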

Figure 11 shows foolDminus (which is foolD with 3 faces removed) on the left and the result of forcing, i.e. force foolDminus, on the right, which is the same Tgraph we get from force foolD (modulo vertex renumbering).

foolDminus = 
    removeFaces [RD(6,14,11), LD(6,12,14), RK(5,13,2)] foolD
Figure 11: foolDminus and force foolDminus = force foolD

Figures 12, 13 and 14 illustrate the result of forcing a 5-times decomposed kite, a 5-times decomposed dart, and a 5-times decomposed sun (respectively). The first two figures reproduce diagrams from an article by Roger Penrose illustrating the extent of influence of tiles round a decomposed kite and dart. [Penrose R Tilings and quasi-crystals; a non-local growth problem? in Aperiodicity and Order 2, edited by Jarich M, Academic Press, 1989. (fig 14)].

Figure 12: force kiteD5 with kiteD5 shown in red
Figure 13: force dartD5 with dartD5 shown in red
Figure 14: force sunD5 with sunD5 shown in red

In figure 15, the bottom row shows successive decompositions of a dart (dashed blue arrows from right to left), so applying compose to each dart will go back (green arrows from left to right). The black vertical arrows are force. The solid blue arrows from right to left are (force . decompose) being applied to the successive forced Tgraphs. The green arrows in the reverse direction are compose again and the intermediate (partCompose) figures are shown in the top row with the remainder faces in pale green.

Figure 15: Arrows: black = force, green = compose, solid blue = (force . decompose)

Figure 16 shows the forced graphs of the seven vertex types (with the starting Tgraphs in red) along with a kite (top right).

Figure 16: Relating the forced seven vertex types and the kite

These are related to each other as shown in the columns. Each Tgraph composes to the one above (an empty Tgraph for the ones in the top row) and the Tgraph below is its forced decomposition. [The rows have been scaled differently to make the vertex types easier to see.]

Adding Faces to a Tgraph

This is technically tricky because we need to discover what vertices (and implicitly edges) need to be newly created and which ones already exist in the Tgraph. This goes beyond a simple graph operation and requires use of the geometry of the faces. We have chosen not to do a full conversion to vectors to work out all the geometry, but instead we introduce a local representation of relative directions of edges at a vertex allowing a simple equality test.

Edge directions

All directions are integer multiples of 1/10th turn (mod 10) so we use these integers for face internal angles and boundary external angles. The face adding process always adds to the right of a given directed edge (a,b) which must be a boundary directed edge. [Adding to the left of an edge (a,b) would mean that (b,a) will be the boundary direction and so we are really adding to the right of (b,a)]. Face adding looks to see if either of the two other edges already exist in the Tgraph by considering the end points a and b to which the new face is to be added, and checking angles.

This allows an edge in a particular sought direction to be discovered. If it is not found it is assumed not to exist. However, the search will be undermined if there are crossing boundaries. In such a case there will be more than two boundary directed edges at the vertex and there is no unique external angle.

Establishing the no crossing boundaries property ensures these failures cannot occur. We can easily check this property for newly created Tgraphs (with checkedTgraph) and the face adding operations cannot create crossing boundaries.
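The tenth-turn arithmetic can be made concrete with the internal angles of the half-tiles. Listed at (origin, opp, wing), a half dart is a 36-108-36 triangle (sides 1, 1, phi) and a half kite a 36-72-72 triangle (sides phi, phi, 1). These numbers follow from the tile geometry, but intAngles is a hypothetical name here, not the library's:

```haskell
data HalfTile rep = LD rep | RD rep | LK rep | RK rep
  deriving (Show, Eq)

-- Internal angles in tenth-turns at (origin, opp, wing).
-- Half dart: 36/108/36 degrees; half kite: 36/72/72 degrees.
intAngles :: HalfTile rep -> (Int, Int, Int)
intAngles (LD _) = (1, 3, 1)
intAngles (RD _) = (1, 3, 1)
intAngles (LK _) = (1, 2, 2)
intAngles (RK _) = (1, 2, 2)
```

Each triple sums to 5 tenths (a triangle's half turn), and the sun vertex type, ten kite halves meeting at their origins, gives 10 × 1 = 10 tenths: a full turn.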

Touching Vertices and Crossing Boundaries

When a new face to be added on (a,b) has neither of the other two edges already in the Tgraph, the third vertex needs to be created. However it could already exist in the Tgraph – it is not on an edge coming from a or b but from another non-local part of the Tgraph. We call this a touching vertex. If we simply added a new vertex without checking for a clash this would create a non-sensible Tgraph. However, if we do check and find an existing vertex, we still cannot add the face using this because it would create a crossing boundary.

Our version of forcing prevents face additions that would create a touching vertex/crossing boundary by calculating the positions of boundary vertices.

No conflicting edges

There is a final (simple) check when adding a new face, to prevent a long edge (phiEdge) sharing with a short edge. This can arise if we force an incorrect Tgraph (as we will see later).

Implementing Forcing

Our order of forcing prioritises updates (face additions) which do not introduce a new vertex. Such safe updates are easy to recognise and they do not require a touching vertex check. Surprisingly, this pretty much removes the problem of touching vertices altogether.

As an illustration, consider foolDminus again on the left of figure 11. Adding the left dart onto edge (12,14) is not a safe addition (and would create a crossing boundary at 6). However, adding the right dart RD(6,14,11) is safe and creates the new edge (6,14) which then makes the left dart addition safe. In fact it takes some contrivance to come up with a Tgraph with an update that could fail the check during forcing when safe cases are always done first. Figure 17 shows such a contrived Tgraph, formed by removing the faces shown in green from a twice decomposed sun on the left. The forced result is shown on the right. When there are no safe cases, we need to try an unsafe one. The four green faces at the bottom are blocked by the touching vertex check. This leaves any one of 9 half-kites at the centre which would pass the check. But after just one of these is added, the check is not needed again. There is always a safe addition to be done at each step until all the green faces are added.

Figure 17: A contrived example requiring a touching vertex check

Boundary information

The implementation of forcing has been made more efficient by calculating some boundary information in advance. This boundary information uses a type BoundaryState

data BoundaryState
  = BoundaryState
    { boundary    :: [Dedge]
    , bvFacesMap  :: Mapping Vertex [TileFace]
    , bvLocMap    :: Mapping Vertex (Point V2 Double)
    , allFaces    :: [TileFace]
    , allVertices :: [Vertex]
    , nextVertex  :: Vertex
    } deriving (Show)

This records the boundary directed edges (boundary) plus a mapping of the boundary vertices to their incident faces (bvFacesMap) plus a mapping of the boundary vertices to their positions (bvLocMap). It also keeps track of all the faces and vertices. The boundary information is easily incremented for each face addition without being recalculated from scratch, and a final Tgraph with all the new faces is easily recovered from the boundary information when there are no more updates.

makeBoundaryState  :: Tgraph -> BoundaryState
recoverGraph  :: BoundaryState -> Tgraph

The saving that comes from using boundary information lies in efficient incremental changes to the boundary information and, of course, in avoiding the need to consider internal faces. As a further optimisation we keep track of updates in a mapping from boundary directed edges to updates, and supply a list of affected edges after an update so the update calculator (update generator) need only revise these. The boundary and mapping are combined in a ForceState.

type UpdateMap = Mapping Dedge Update
type UpdateGenerator = BoundaryState -> [Dedge] -> UpdateMap
data ForceState = ForceState 
       { boundaryState:: BoundaryState
       , updateMap:: UpdateMap 
       }

Forcing then involves using a specific update generator (allUGenerator) and initialising the state, then using the recursive forceAll which keeps doing updates until there are no more, before recovering the final Tgraph.

force :: Tgraph -> Tgraph
force = forceWith allUGenerator

forceWith :: UpdateGenerator -> Tgraph -> Tgraph
forceWith uGen = recoverGraph . boundaryState .
                 forceAll uGen . initForceState uGen

forceAll :: UpdateGenerator -> ForceState -> ForceState
initForceState :: UpdateGenerator -> Tgraph -> ForceState

In addition to force we can easily define

wholeTiles :: Tgraph -> Tgraph
wholeTiles = forceWith wholeTileUpdates

which just uses the first forcing rule to make sure every half-tile has a matching other half.

We also have a version of force which counts to a specific number of face additions.

stepForce :: Int -> ForceState -> ForceState

This proved essential in uncovering problems of accumulated inaccuracy in calculating boundary positions (now fixed).

Some Other Experiments

Below we describe results of some experiments using the tools introduced above. Specifically: emplacements, sub-Tgraphs, incorrect tilings, and composition choices.

Emplacements

The finite number of rules used in forcing are based on local boundary vertex and edge information only. We thought we may be able to improve on this by considering a composition and forcing at the next level up before decomposing and forcing again. This thus considers slightly broader local information. In fact we can iterate this process to all the higher levels of composition. Some Tgraphs produce an empty Tgraph when composed so we can regard those as maximal compositions. For example compose fool produces an empty Tgraph.

The idea was to take an arbitrary Tgraph and apply (compose . force) repeatedly to find its maximally composed (non-empty) Tgraph, before applying (force . decompose) repeatedly back down to the starting level (so the same number of decompositions as compositions).

We called the function emplace, and called the result the emplacement of the starting Tgraph as it shows a region of influence around the starting Tgraph.

With earlier versions of forcing when we had fewer rules, emplace g often extended force g for a Tgraph g. This allowed the identification of some new rules. However, since adding the new rules we have not found Tgraphs where the result of force had fewer faces than the result of emplace.

[As an important update, we have now found examples where the result of force strictly includes the result of emplace (modulo vertex renumbering).]

Sub-Tgraphs

In figure 18 on the left we have a four times decomposed dart dartD4 followed by two sub-Tgraphs brokenDart and badlyBrokenDart which are constructed by removing faces from dartD4 (but retaining the connectedness condition and the no crossing boundaries condition). These all produce the same forced result (depicted middle row left in figure 15).

Figure 18: dartD4, brokenDart, badlyBrokenDart

However, if we do compositions without forcing first we find badlyBrokenDart fails because it produces a graph with crossing boundaries after 3 compositions. So compose on its own is not always safe, where safe means guaranteed to produce a valid Tgraph from a valid correct Tgraph.

In other experiments we tried force on Tgraphs with holes and on incomplete boundaries around a potential hole. For example, we have taken the boundary faces of a forced, 5 times decomposed dart, then removed a few more faces to make a gap (which is still a valid Tgraph). This is shown at the top in figure 19. The result of forcing reconstructs the complete original forced graph. The bottom figure shows an intermediate stage after 2200 face additions. The gap cannot be closed off to make a hole as this would create a crossing boundary, but the channel does get filled and eventually closes the gap without creating a hole.

Figure 19: Forcing boundary faces with a gap (after 2200 steps)

Incorrect Tilings

When we say a Tgraph g is correct (respectively: incorrect), we mean g represents a correct tiling (respectively: incorrect tiling). A simple example of an incorrect Tgraph is a kite with a dart on each side (referred to as a mistake by Penrose) shown on the left of figure 20.

*Main> mistake
Tgraph { maxV = 8
       , faces = [RK (1,2,4),LK (1,3,2),RD (3,1,5)
                 ,LD (4,6,1),LD (3,5,7),RD (4,8,6)
                 ]
       }

If we try to force (or emplace) this Tgraph it produces an error in construction which is detected by the test for conflicting edge types (a phiEdge sharing with a non-phiEdge).

*Main> force mistake
... *** Exception: doUpdate:(incorrect tiling)
Conflicting new face RK (11,1,6)
with neighbouring faces
[RK (9,1,11),LK (9,5,1),RK (1,2,4),LK (1,3,2),RD (3,1,5),LD (4,6,1),RD (4,8,6)]
in boundary
BoundaryState ...

In figure 20 on the right, we see that after successfully constructing the two whole kites on the top dart short edges, there is an attempt to add an RK on edge (1,6). The process finds an existing edge (1,11) in the correct direction for one of the new edges so tries to add the erroneous RK (11,1,6) which fails a noConflicts test.

Figure 20: An incorrect graph (mistake), and the point at which force mistake fails

So it is certainly true that incorrect Tgraphs may fail on forcing, but forcing cannot create an incorrect Tgraph from a correct Tgraph.

If we apply decompose to mistake it produces another incorrect Tgraph (which is similarly detected if we apply force), but will nevertheless still compose back to mistake if we do not try to force.

Interestingly, though, the incorrectness of a Tgraph is not always preserved by decompose. If we start with mistake1, which is mistake with just two of the half darts (and also incorrect), we still get a similar failure on forcing, but decompose mistake1 is no longer incorrect. If we apply compose to the result, or force then compose, the mistake is thrown away to leave just a kite (see figure 21). This is an example where compose is not a left inverse to either decompose or (force . decompose).

Figure 21: mistake1 with its decomposition, forced decomposition, and recomposed.

Composing with Choices

We know that unknowns indicate possible choices (although some choices may lead to incorrect Tgraphs). As an experiment we introduce

makeChoices :: Tgraph -> [Tgraph]

which produces 2^n alternatives for the 2 choices of each of n unknowns (prior to composing). This uses forceLDB which forces an unknown to be a largeDartBase by adding an appropriate joined half dart at the node, and forceLKC which forces an unknown to be a largeKiteCentre by adding a half dart and a whole kite at the node (making up the 3 pieces for a larger half kite).
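The 2^n enumeration itself can be sketched with the list monad (toy types; the constructor names LargeDartBase and LargeKiteCentre mirror the two choices described above, not the library's representation):

```haskell
-- Sketch of the 2^n choice enumeration behind makeChoices:
-- every unknown independently becomes a largeDartBase or a largeKiteCentre.
data Choice = LargeDartBase | LargeKiteCentre deriving (Eq, Show)

-- mapM in the list monad produces all combinations of per-unknown choices.
allChoices :: [v] -> [[(v, Choice)]]
allChoices = mapM (\v -> [(v, LargeDartBase), (v, LargeKiteCentre)])
```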

Figure 22 illustrates the four choices for composing fool this way. The top row has the four choices of makeChoices fool (with the fool shown embedded in red in each case). The bottom row shows the result of applying compose to each choice.

Figure 22: makeChoices fool (top row) and compose of each choice (bottom row)

In this case, all four compositions are correct tilings. The problem is that, in general, some of the choices may lead to incorrect tilings. More specifically, a choice of one unknown can determine what other unknowns have to become with constraints such as

  • a and b have to be opposite choices
  • a and b have to be the same choice
  • a and b cannot both be largeKiteCentres
  • a and b cannot both be largeDartBases

This analysis of constraints on unknowns is not trivial. The potentially exponential number of results from choices suggests we should compose and force as much as possible and only consider unknowns of a maximal Tgraph.

For calculating the emplacement of a Tgraph, we first find the forced maximal Tgraph before decomposing. We could also consider using makeChoices at this top step when there are unknowns, i.e. a version of emplace which produces these alternative results (emplaceChoices).

The result of emplaceChoices is illustrated for foolD in figure 23. The first force and composition is unique, producing the fool level, at which point we get 4 alternatives, each of which composes further as previously illustrated in figure 22. Each of these is forced, then decomposed and forced, decomposed and forced again back down to the starting level. In figure 23 foolD is overlaid on the 4 alternative results. What they have in common is (as you might expect) emplace foolD, which equals force foolD and is the graph shown on the right of figure 11.

Figure 23: emplaceChoices foolD

Future Work

I am collaborating with Stephen Huggett who suggested the use of graphs for exploring properties of the tilings. We now have some tools to experiment with but we would also like to complete some formalisation and proofs.

It would also be good to establish whether it is true that g is incorrect iff force g fails.

We have other conjectures relating to subgraph ordering of Tgraphs and Galois connections to explore.

by readerunner at July 30, 2023 05:01 PM

Graphs, Kites and Darts – Empires and SuperForce

We have been exploring properties of Penrose’s aperiodic tilings with kites and darts using Haskell.

Previously in Diagrams for Penrose tiles we implemented tools to draw finite tilings using Haskell diagrams. There we also noted that legal tilings are only correct tilings if they can be continued infinitely and are incorrect otherwise. In Graphs, Kites and Darts we introduced a graph representation for finite tilings (Tgraphs) which enabled us to implement operations that use neighbouring tile information. In particular we implemented a force operation to extend a Tgraph on any boundary edge where there is a unique choice for adding a tile.

In this note we find a limitation of force, show a way to improve on it (superForce), and introduce boundary coverings which are used to implement superForce and calculate empires.

Properties of Tgraphs

A Tgraph is a collection of half-tile faces representing a legal tiling, and a half-tile face is either an LD (left dart), RD (right dart), LK (left kite), or RK (right kite), each with 3 vertices to form a triangle. Faces of the Tgraph which are not half-tile faces are considered external regions and those edges round the external regions are the boundary edges of the Tgraph. The half-tile faces in a Tgraph are required to be connected and locally tile-connected which means that there are exactly two boundary edges at any boundary vertex (no crossing boundaries).
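A plausible rendering of this half-tile face type (assumed and simplified here; the library may differ in details) is:

```haskell
-- Assumed sketch of the half-tile face type: four constructors,
-- each holding the face's three vertices as a triple.
type Vertex = Int

data TileFace = LD (Vertex, Vertex, Vertex)  -- left dart
              | RD (Vertex, Vertex, Vertex)  -- right dart
              | LK (Vertex, Vertex, Vertex)  -- left kite
              | RK (Vertex, Vertex, Vertex)  -- right kite
  deriving (Eq, Show)
```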

As an example Tgraph we show kingGraph (the three darts and two kites round a king vertex), where

  kingGraph = makeTgraph 
    [LD (1,2,3),RD (1,11,2),LD (1,4,5),RD (1,3,4),LD (1,10,11)
    ,RD (1,9,10),LK (9,1,7),RK (9,7,8),RK (5,7,1),LK (5,6,7)
    ]

This is drawn in figure 1 using

  hsep 1 [drawjLabelled kingGraph, draw kingGraph]

which shows vertex labels and dashed join edges (left) and without labels and join edges (right). (hsep 1 provides a horizontal separator of unit length.)

Figure 1: kingGraph with labels and dashed join edges (left) and without (right).

Properties of forcing

We know there are at most two legal possibilities for adding a half-tile on a boundary edge of a Tgraph. If there are zero legal possibilities for adding a half-tile to some boundary edge, we have a stuck tiling/incorrect Tgraph.

Forcing deals with all cases where there is exactly one legal possibility for extending on a boundary edge. That means forcing either fails at some stage with a stuck Tgraph (indicating the starting Tgraph was incorrect) or it enlarges the starting Tgraph until every boundary edge has exactly two legal possibilities for adding a half-tile so a choice would need to be made to grow the Tgraph any further.

Figure 2 shows force kingGraph with kingGraph shown red.

Figure 2: force kingGraph with kingGraph shown red.

If g is a correct Tgraph, then force g succeeds and the resulting Tgraph will be common to all infinite tilings that extend the finite tiling represented by g. However, we will see that force g is not a greatest lower bound of (infinite) tilings that extend g. Firstly, what is common to all extensions of g may not be a connected collection of tiles. This leads to the concept of empires which we discuss later. Secondly, even if we only consider the connected common region containing g, we will see that we need to go beyond force g to find this, leading to an operation we call superForce.

Our empire and superForce operations are implemented using boundary coverings which we introduce next.

Boundary edge covering

Given a successfully forced Tgraph fg, a boundary edge covering of fg is a list of successfully forced extensions of fg such that

  1. in each extension, no boundary edge of fg remains on the boundary, and
  2. the list takes into account all legal choices for extending on each boundary edge of fg.

[Technically this is a covering of the choices round the boundary, but each extension is also a cover of the boundary edges.] Figure 3 shows a boundary edge covering for a forced kingGraph (force kingGraph is shown red in each extension).

Figure 3: A boundary edge covering of force kingGraph.

In practice, we do not need to explore both choices for every boundary edge of fg. When one choice is made, it may force choices for other boundary edges, reducing the number of boundary edges we need to consider further.

The main function is boundaryECovering working on a BoundaryState (which is a Tgraph with extra boundary information). It uses covers which works on a list of extensions each paired with the remaining set of the original boundary edges not yet covered. (Initially covers is given a singleton list with the starting boundary state and the full set of boundary edges to be covered.) For each extension in the list, if its uncovered set is empty, that extension is a completed cover. Otherwise covers replaces the extension with further extensions. It picks the (lowest numbered) boundary edge in the uncovered set, tries extending with a half-dart and with a half-kite on that edge, forcing in each case, then pairs each result with its set of remaining uncovered boundary edges before adding the resulting extensions back at the front of the list to be processed again. If one of the choices for a dart/kite leads to an incorrect tiling (a stuck tiling) when forced, that choice is dropped (provided the other choice succeeds). The final list returned consists of all the completed covers.

  boundaryECovering:: BoundaryState -> [BoundaryState]
  boundaryECovering bs = covers [(bs, Set.fromList (boundary bs))]

  covers:: [(BoundaryState, Set.Set Dedge)] -> [BoundaryState]
  covers [] = []
  covers ((bs,es):opens) 
    | Set.null es = bs:covers opens -- bs is complete
    | otherwise   = covers (newcases ++ opens)
       where (de,des) = Set.deleteFindMin es
             newcases = fmap (\b -> (b, commonBdry des b))
                             (atLeastOne $ tryDartAndKite bs de)

Here we have used

  type Try a = Either String a
  tryDartAndKite:: BoundaryState -> Dedge -> [Try BoundaryState]
  atLeastOne    :: [Try a] -> [a]

We frequently use Try as a type for results of partial functions where we need to continue computation if there is a failure. For example we have a version of force (called tryForce) that returns a Try Tgraph so it does not fail by raising an error, but returns a result indicating either an explicit failure situation or a successful result with a final forced Tgraph. The function tryDartAndKite tries adding an appropriate half-dart and half-kite on a given boundary edge, then uses tryForceBoundary (a variant of tryForce which works with boundary states) on each result and returns a list of Try results. The list of Try results is converted with atLeastOne which collects the successful results but will raise an error when there are no successful results.
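A plausible definition of atLeastOne (an assumption for illustration, not the library source) keeps the successful results and raises an error only when every attempt failed:

```haskell
type Try a = Either String a

-- Assumed sketch of atLeastOne: collect the successes; if there are none,
-- raise an error carrying the accumulated failure messages.
atLeastOne :: [Try a] -> [a]
atLeastOne results =
  case [a | Right a <- results] of
    [] -> error ("atLeastOne: no successful results:\n"
                 ++ unlines [e | Left e <- results])
    as -> as
```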

Boundary vertex covering

You may notice in figure 3 that the top right cover still has boundary vertices of kingGraph on the final boundary. We use a boundary vertex covering rather than a boundary edge covering if we want to exclude these cases. This involves picking a boundary edge that includes such a vertex and continuing the process of growing possible extensions until no boundary vertices of the original remain on the boundary.

Empires

A partial example of an empire was shown in a 1977 article by Martin Gardner [1]. The full empire of a finite tiling would consist of the common faces of all the infinite extensions of the tiling. This will include at least the force of the tiling but it is not obviously finite. Here we confine ourselves to the empire in finite local regions.

For example, we can calculate a local empire for a given Tgraph g by finding the common faces of all the extensions in a boundary vertex covering of force g (which we call empire1 g).

This requires an efficient way to compare Tgraphs. We have implemented guided intersection and guided union operations which, when given a common edge starting point for two Tgraphs, proceed to compare the Tgraphs face by face and produce an appropriate relabelling of the second Tgraph to match the first Tgraph only in the overlap where they agree. These operations may also use geometric positioning information to deal with cases where the overlap is not just a single connected region. From these we can return a union as a single Tgraph when it exists, and an intersection as a list of common faces. Since the (guided) intersection of Tgraphs (the common faces) may not be connected, we do not have a resulting Tgraph. However we can arbitrarily pick one of the argument Tgraphs and emphasise which are the common faces in this example Tgraph.
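Ignoring the relabelling that the real guided intersection performs, the common-faces calculation over a covering reduces to intersecting face sets, which can be sketched as:

```haskell
import qualified Data.Set as Set

-- Sketch: the faces common to every extension in a covering are
-- the intersection of the extensions' face sets (relabelling omitted).
commonFaces :: Ord f => [Set.Set f] -> Set.Set f
commonFaces = foldr1 Set.intersection
```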

Figure 4 (left) shows empire1 kingGraph where the starting kingGraph is shown in red. The grey-filled faces are the common faces from a boundary vertex covering. We can see that these are not all connected and that the force kingGraph from figure 2 corresponds to the connected set of grey-filled faces around and including the kingGraph in figure 4.

Figure 4: King’s empire (level 1 and level 2).

We call this a level 1 empire because we only explored out as far as the first boundary covering. We could instead find further boundary coverings for each of the extensions in a boundary covering. This grows larger extensions in which to find common faces. On the right of figure 4 is a level 2 empire (empire2 kingGraph) which finds the intersection of the combined boundary edge coverings of each extension in a boundary edge covering of force kingGraph. Obviously this process could be continued further but, in practice, it is too inefficient to go much further.

SuperForce

We might hope that (when not discovering an incorrect tiling), force g produces the maximal connected component containing g of the common faces of all infinite extensions of g. This is true for the kingGraph as noted in figure 4. However, this is not the case in general.

The problem is that forcing will not discover if one of the two legal choices for extending a resulting boundary edge always leads to an incorrect Tgraph. In such a situation, the other choice would be common to all infinite extensions.

We can use a boundary edge covering to reveal such cases, leading us to a superForce operation. For example, figure 5 shows a boundary edge covering for the forced Tgraph shown in red.

Figure 5: One choice cover.

This example is particularly interesting because in every case, the leftmost end of the red forced Tgraph has a dart immediately extending it. Why is there no case extending one of the leftmost two red edges with a half-kite? The fact that such cases are missing from the boundary edge covering suggests they are not possible. Indeed we can check this by adding a half-kite to one of the edges and trying to force. This leads to a failure showing that we have an incorrect tiling. Figure 6 illustrates the Tgraph at the point that it is discovered to be stuck (at the bottom left) by forcing.

Figure 6: An incorrect extension.

Our superForce operation starts by forcing a Tgraph. After a successful force, it creates a boundary edge covering for the forced Tgraph and checks to see if there is any boundary edge of the forced Tgraph for which each cover has the same choice. If so, that choice is made to extend the forced Tgraph and the process is repeated by applying superForce to the result. Otherwise, just the result of forcing is returned.
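The loop just described can be sketched generically (toy signatures; the helper that inspects a boundary edge covering for a choice shared by every cover is assumed, not the library's API):

```haskell
-- Generic sketch of the superForce loop: force, then commit any choice
-- that every cover agrees on, and repeat until no such choice exists.
superForceWith :: (g -> g)        -- a force operation
               -> (g -> Maybe g)  -- commit a choice shared by all covers, if any
               -> g -> g
superForceWith doForce findForcedChoice g =
  let fg = doForce g
  in case findForcedChoice fg of
       Nothing -> fg                                      -- just the forced result
       Just g' -> superForceWith doForce findForcedChoice g'
```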

Figure 7 shows a chain of examples (rockets) where superForce has been used. In each case, the starting Tgraph is shown red, the additional faces added by forcing are shown black, and any further extension produced by superForce is shown in blue.

Figure 7: SuperForce rockets.

Coda

We still do not know if forcing decides that a Tgraph is correct/incorrect. Can we conclude that if force g succeeds then g (and force g) are correct? We found examples (rockets in figure 7) where force succeeds but one of the 2 legal choices for extending on a boundary edge leads to an incorrect Tgraph. If we find an example g where force g succeeds but both legal choices on a boundary edge lead to incorrect Tgraphs we will have a counter-example. If such a g exists then superForce g will raise an error. [The calculation of a boundary edge covering will call atLeastOne where both branches have led to failure for extending on an edge.]

This means that when superForce succeeds every resulting boundary edge has two legal extensions, neither of which will get stuck when forced.

I would like to thank Stephen Huggett who suggested the idea of using graphs to represent tilings and who is working with me on proof problems relating to the kite and dart tilings.

Reference: [1] Martin Gardner (1977) Mathematical Games. Scientific American, 236(1), pages 110-121. http://www.jstor.org/stable/24953856

by readerunner at July 30, 2023 02:30 PM

July 27, 2023

Tweag I/O

Building a Rust workspace with Bazel

The vast majority of Rust projects use Cargo as a build tool. Cargo is great when you are developing and packaging a single Rust library or application, but when it comes to a fast-growing and complex workspace, one could be attracted to the idea of using a more flexible and scalable build system. Here is a nice article elaborating on why Cargo should not be considered such a build system. But there are a handful of reasons to consider Bazel:

  • Bazel’s focus on hermeticity and aggressive caching allows us to improve median build and test times, especially for a single Pull Request against a relatively large codebase.
  • Remote caching and execution can significantly reduce the amount of Rust compilation done locally on developers’ machines.
  • The polyglot nature of Bazel allows expressing connections between Rust code and targets written in other languages in a much simpler and more straightforward manner. Be it building Python packages from Rust code with PyO3, connecting JavaScript code with WASM compiled from Rust, or managing Rust crates incorporating FFI calls from a C library; with Bazel you have a solid solution.

That’s all great, but how am I going to make my Cargo workspace use Bazel? To show this, I’m going to take an open source Rust project and guide you through the steps to migrate it to Bazel.

Ripgrep

I chose ripgrep, since it is well-known in the Rust community. The project is organized as a Cargo workspace consisting of several crates:

[[bin]]
bench = false
path = "crates/core/main.rs"
name = "rg"

[[test]]
name = "integration"
path = "tests/tests.rs"

[workspace]
members = [
  "crates/globset",
  "crates/grep",
  "crates/cli",
  "crates/matcher",
  "crates/pcre2",
  "crates/printer",
  "crates/regex",
  "crates/searcher",
  "crates/ignore",
]

Let’s see what it will take to build and test this workspace with Bazel.

Setting up a WORKSPACE

Luckily, there is a Bazel extension for building Rust projects: rules_rust. It supports handling Rust toolchains, building Rust libraries, binaries and proc_macro crates, running build.rs scripts, automatically converting Cargo dependencies to Bazel and a lot more.

First, we’ll create a Bazel workspace. If you are not familiar with Bazel and Bazel workspaces, we have an article on the Tweag blog covering this topic. So, let’s start with creating a WORKSPACE file and importing rules_rust:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_rust",
    sha256 = "48e715be2368d79bc174efdb12f34acfc89abd7ebfcbffbc02568fcb9ad91536",
    urls = ["https://github.com/bazelbuild/rules_rust/releases/download/0.24.0/rules_rust-v0.24.0.tar.gz"],
)

load("@rules_rust//rust:repositories.bzl", "rules_rust_dependencies", "rust_register_toolchains")

rules_rust_dependencies()

rust_register_toolchains(
    edition = "2018",
)

Here we’ve specified the edition. Editions are used by the Rust team to introduce backwards-incompatible changes; you can leave the edition unspecified to use the latest one. ripgrep uses “2018”, and we are doing the same. There is also a standard mechanism of distributing and updating the Rust toolchain through channels. rust_register_toolchains allows us to set the Rust toolchain version for all three channels (stable, beta and nightly) we want to use for our workspace. Regarding the rustc version, if we look at the ripgrep CI configuration, we notice it uses the nightly toolchain pinned by the dtolnay/rust-toolchain Github action. rust_register_toolchains allows us to omit the versions attribute. In this case, we will end up using the stable and nightly versions pinned by the version of rules_rust. One could argue this behavior is closer to what is done by ripgrep’s CI configuration, but I would rather suggest being more explicit for the sake of reproducibility and clarity:

rust_register_toolchains(
    edition = "2018",
    versions = [
        "1.70.0",
        "nightly/2023-06-01",
    ],
)

Here we’ve defined explicitly which versions of stable and nightly Rust we want to use in our build. By default, Bazel invokes rustc from the stable channel. For switching to nightly, we need to invoke the build with --@rules_rust//rust/toolchain/channel=nightly flag. To make nightly default, we can create a .bazelrc file inside the repository root and add this line:

build --@rules_rust//rust/toolchain/channel=nightly

External dependencies

Cargo makes it easy to specify the dependencies and build your Rust project on top of them. In Bazel we also need to explicitly declare all the external dependencies and it would be extremely painful to manually write BUILD files for every Rust crate our project depends on. Luckily, there are some tools to generate Bazel targets from Cargo.lock files, for external dependencies. The rules_rust documentation mentions two: crate_universe and cargo-raze. Since crate_universe is a successor to cargo-raze, included into the rules_rust package, I’m going to focus on this tool.

To configure it we need to add the following to our WORKSPACE file:

load("@rules_rust//crate_universe:repositories.bzl", "crate_universe_dependencies")

crate_universe_dependencies()

load("@rules_rust//crate_universe:defs.bzl", "crates_repository")

crates_repository(
    name = "crate_index",
    cargo_lockfile = "//:Cargo.lock",
    lockfile = "//:cargo-bazel-lock.json",
    manifests = [
        "//:Cargo.toml",
        "//:crates/globset/Cargo.toml",
        "//:crates/grep/Cargo.toml",
        "//:crates/cli/Cargo.toml",
        "//:crates/matcher/Cargo.toml",
        "//:crates/pcre2/Cargo.toml",
        "//:crates/printer/Cargo.toml",
        "//:crates/regex/Cargo.toml",
        "//:crates/searcher/Cargo.toml",
        "//:crates/ignore/Cargo.toml",
    ],
)

crates_repository creates a repository_rule containing targets for all external dependencies explicitly mentioned in the Cargo.toml files as dependencies. We need to specify several attributes:

  • cargo_lockfile: the actual Cargo.lock of the Cargo workspace.
  • lockfile: this is the file used by crate_universe to store metadata gathered from Cargo files. Initially it should be created empty, then it will be automatically updated and maintained by crate_universe.
  • manifests: the list of Cargo.toml files in the workspace.

Let’s create an empty lock file for crate_universe and the empty BUILD.bazel file for a so far empty Bazel package:

$ touch cargo-bazel-lock.json BUILD.bazel

Now we can run bazel sync to pin cargo dependencies as Bazel targets1:

$ CARGO_BAZEL_REPIN=1 bazel sync --only=crate_index

You should run this command whenever you update dependencies in Cargo files. We can also use a Bazel query to examine targets generated by crate_universe:

$ bazel query @crate_index//...
@crate_index//:aho-corasick
@crate_index//:base64
@crate_index//:bstr
@crate_index//:bytecount
@crate_index//:clap
@crate_index//:crossbeam-channel
@crate_index//:encoding_rs
@crate_index//:encoding_rs_io
@crate_index//:fnv
@crate_index//:glob
@crate_index//:jemallocator
@crate_index//:lazy_static
@crate_index//:log
@crate_index//:memchr
@crate_index//:memmap
@crate_index//:pcre2
@crate_index//:regex
@crate_index//:regex-automata
@crate_index//:regex-syntax
@crate_index//:same-file
@crate_index//:serde
@crate_index//:serde_derive
@crate_index//:serde_json
@crate_index//:srcs
@crate_index//:termcolor
@crate_index//:thread_local
@crate_index//:walkdir
@crate_index//:winapi-util

Here we can see that the @crate_index repository consists of targets for dependencies explicitly mentioned in the Cargo files.

Writing BUILD files and Cargo-Bazel parity

Now it’s time to build some crates in our workspace. Let’s look at the matcher crate as an example; we will handle the rest of the crates the same way.

[package]
name = "grep-matcher"
version = "0.1.6"  #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
A trait for regular expressions, with a focus on line oriented search.
"""
documentation = "https://docs.rs/grep-matcher"
homepage = "https://github.com/BurntSushi/ripgrep/tree/master/crates/matcher"
repository = "https://github.com/BurntSushi/ripgrep/tree/master/crates/matcher"
readme = "README.md"
keywords = ["regex", "pattern", "trait"]
license = "Unlicense OR MIT"
autotests = false
edition = "2018"

[dependencies]
memchr = "2.1"

[dev-dependencies]
regex = "1.1"

[[test]]
name = "integration"
path = "tests/tests.rs"

To build this crate with Bazel we create crates/matcher/BUILD.bazel:

load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "grep-matcher",
    srcs = glob([
        "src/**/*.rs",
    ]),
    deps = [
        "@crate_index//:memchr",
    ],
    proc_macro_deps = [],
    visibility = ["//visibility:public"],
)

Here we simply define the Rust library according to the documentation. Bazel requires us to specify all dependencies explicitly, and since we are generating a @crate_index repository based on Cargo files, to add new dependencies, we’ll have to change Cargo files, run bazel sync and update BUILD files accordingly. This will create two sources of the same information that need to be synchronized manually, which is inconvenient and error-prone. Luckily, there are some handy functions in crate_universe to address this. We can rewrite the same BUILD file like this:

load("@crate_index//:defs.bzl", "aliases", "all_crate_deps")
load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "grep-matcher",
    srcs = glob([
        "src/**/*.rs",
    ]),
    aliases = aliases(),
    deps = all_crate_deps(),
    proc_macro_deps = all_crate_deps(
        proc_macro = True,
    ),
    visibility = ["//visibility:public"],
)

We don’t need to specify dependencies explicitly any more. The all_crate_deps function returns the list of dependencies for the crate defined in the same directory as a BUILD file this function was called from, based on the gathered metadata saved in the cargo-bazel-lock.json file. To see the BUILD file with these functions expanded one could run:

$ bazel query //crates/matcher:grep-matcher  --output=build
rust_library(
  name = "grep-matcher",
  visibility = ["//visibility:public"],
  aliases = {},
  deps = ["@crate_index__memchr-2.5.0//:memchr"],
  proc_macro_deps = [],
  srcs = ["//crates/matcher:src/interpolate.rs", "//crates/matcher:src/lib.rs"],
)

We need aliases = aliases() here in case the crate is using dependency renaming. There is an example of it in the searcher crate:

memmap = { package = "memmap2", version = "0.5.3" }

Otherwise we would have to write explicitly:

aliases = {
    "@crate_index//:memmap2": "memmap",
}

This allows us to have Cargo files as a single source of external dependencies, so when we need to add a new dependency, for example, we could just use cargo add and repin Bazel dependencies with CARGO_BAZEL_REPIN=1 bazel sync --only=crate_index. An important limitation is that crate_universe ignores path dependencies. This means we need to manually specify internal dependencies inside the workspace. We’ll see how this works later.

Next, it’s time to migrate the tests. Crate grep-matcher has two types of tests: unit and integration. Unit tests are defined in the source files of each library, while integration tests have their own source files. Let’s migrate the unit tests first:

load("@rules_rust//rust:defs.bzl", "rust_test")
rust_test(
    name = "tests",
    crate = ":grep-matcher",
    aliases = aliases(
        normal_dev = True,
        proc_macro_dev = True,
    ),
    deps = all_crate_deps(
        normal_dev = True,
    ),
    proc_macro_deps = all_crate_deps(
        proc_macro_dev = True,
    ),
)

We are using the crate attribute instead of srcs here because those tests don’t have their own sources.

And here is a declaration for integration tests:

rust_test(
    name = "integration",
    srcs = glob([
        "tests/**/*.rs",
    ]),
    crate_root = "tests/tests.rs",
    deps = all_crate_deps(
        normal_dev = True
    ) + [
        ":grep-matcher",
    ],
    proc_macro_deps = all_crate_deps(
        proc_macro_dev = True
    ),
)

Here we’ve added the dependency on the crate and used crate_root since the name of the target and the name of the main file are different.

Build and test ripgrep binary

Let’s look at the root Cargo.toml file:

build = "build.rs"
...
[[bin]]
bench = false
path = "crates/core/main.rs"
name = "rg"
...
[[test]]
name = "integration"
path = "tests/tests.rs"
...
[dependencies]
...
grep = { version = "0.2.12", path = "crates/grep" }
ignore = { version = "0.4.19", path = "crates/ignore" }

First, we need to deal with Cargo invoking build.rs before compiling the binary. And again, there is a rule created specifically for this in rules_rust:

load("@rules_rust//cargo:defs.bzl", "cargo_build_script")

cargo_build_script(
    name = "build",
    srcs = [
        "build.rs",
        "crates/core/app.rs",
    ],
    deps = all_crate_deps(
        normal = True,
        build = True,
    ) + [
        "//crates/grep",
        "//crates/ignore",
    ],
    crate_root = "build.rs",
)

The build.rs file imports code from crates/core/app.rs (to generate shell completions, for example), which in turn depends on some crates from the workspace. Now we can build and test the rg binary:

rust_binary(
    name = "rg",
    srcs = glob([
        "crates/core/**/*.rs",
    ]),
    aliases = aliases(),
    deps = all_crate_deps() + [
        "//crates/grep",
        "//crates/ignore",
        ":build",
    ],
    proc_macro_deps = all_crate_deps(
        proc_macro = True,
    ),
    visibility = ["//visibility:public"],
)

Let’s see how it works:

$ bazel build //:rg
INFO: Analyzed target //:rg (156 packages loaded, 2780 targets configured).
INFO: Found 1 target...
Target //:rg up-to-date:
  bazel-bin/rg
INFO: Elapsed time: 111.921s, Critical Path: 101.86s
INFO: 238 processes: 115 internal, 123 linux-sandbox.
INFO: Build completed successfully, 238 total actions
$ ./bazel-bin/rg
error: The following required arguments were not provided:
    <PATTERN>

USAGE:

    rg [OPTIONS] PATTERN [PATH ...]
    rg [OPTIONS] -e PATTERN ... [PATH ...]
    rg [OPTIONS] -f PATTERNFILE ... [PATH ...]
    rg [OPTIONS] --files [PATH ...]
    rg [OPTIONS] --type-list
    command | rg [OPTIONS] PATTERN
    rg [OPTIONS] --help
    rg [OPTIONS] --version

For more information try --help

Voilà! You can also find artifacts created by build.rs in the ./bazel-bin/build.out_dir directory.

Now we can add the top-level tests the same way we did for grep-matcher and the other crates. The only difference is that the rg integration tests use data files from the tests/data directory, so we need to list them explicitly in the BUILD file:

rust_test(
    name = "tests",
    crate = ":rg",
    deps = all_crate_deps(
        normal_dev = True,
    ),
    proc_macro_deps = all_crate_deps(
        proc_macro_dev = True,
    ),
)

rust_test(
    name = "integration",
    srcs = glob([
        "tests/**/*.rs",
    ]),
    deps = all_crate_deps(
        normal = True,
        normal_dev = True,
    ),
    data = glob([
        "tests/data/**",
    ]),
    proc_macro_deps = all_crate_deps(
        proc_macro_dev = True
    ),
    crate_root = "tests/tests.rs",
)

Now we can execute all our tests:

$ bazel test //...

Improving hermeticity

One of Bazel’s main concerns is the hermeticity of builds: Bazel aims to always produce the same output for the same input source code and product configuration by isolating the build from changes to the host system. One of the major sources of non-hermetic builds in Rust, in turn, is Cargo build scripts, which are executed when compiling dependencies. For example, crates with Rust bindings to well-known C libraries usually have a build script that looks up a dynamic library to link against somewhere in your system. The usual practice for such libraries is to use the pkg-config crate for this lookup, so if you have pkg-config somewhere in the dependency chain, chances are high that your build is not hermetic. crate_universe generates cargo_build_script targets for dependencies automatically, so our Bazel build has the same problem. Let’s look at what we have so far:

$ bazel query "deps(//:rg)" | grep pkg_config
@crate_index__pkg-config-0.3.27//:pkg_config

Okay, let’s try to figure out which library brings in this dependency:

$ bazel query "allpaths(//:rg, @crate_index__pkg-config-0.3.27//:pkg_config)"
//:build
//:build_
//:rg
//crates/grep:grep
//crates/pcre2:grep-pcre2
@crate_index__pcre2-0.2.3//:pcre2
@crate_index__pcre2-sys-0.2.5//:build_script_build
@crate_index__pcre2-sys-0.2.5//:pcre2-sys_build_script
@crate_index__pcre2-sys-0.2.5//:pcre2-sys_build_script_
@crate_index__pcre2-sys-0.2.5//:pcre2_sys
@crate_index__pkg-config-0.3.27//:pkg_config

Now it’s clear there is only one library using it: pcre2-sys. If we look at its build script we’ll see that it looks for libpcre2 unless the environment variable PCRE2_SYS_STATIC is set. In this case, it builds the static library libpcre2.a from sources and links with it. This means that in order to make our build hermetic, we need to pass this environment variable to the build script automatically generated by crate_universe for pcre2-sys. Fortunately, there is a tool for this in crate_universe. Let’s go back to the WORKSPACE file and change it a bit:

load("@rules_rust//crate_universe:defs.bzl", "crates_repository", "crate")

crates_repository(
    name = "crate_index",
    cargo_lockfile = "//:Cargo.lock",
    lockfile = "//:cargo-bazel-lock.json",
    manifests = [
        "//:Cargo.toml",
        "//:crates/globset/Cargo.toml",
        "//:crates/grep/Cargo.toml",
        "//:crates/cli/Cargo.toml",
        "//:crates/matcher/Cargo.toml",
        "//:crates/pcre2/Cargo.toml",
        "//:crates/printer/Cargo.toml",
        "//:crates/regex/Cargo.toml",
        "//:crates/searcher/Cargo.toml",
        "//:crates/ignore/Cargo.toml",
    ],
    annotations = {
        "pcre2-sys": [crate.annotation(
            build_script_env = {
                "PCRE2_SYS_STATIC": "1",
            }
        )],
    },
)

Important note about incrementality

Since Rust 1.24, rustc has supported incremental compilation. Unfortunately, rules_rust does not support it yet, which makes a crate the smallest possible compilation unit for a Rust project built with Bazel. For some crates, this can significantly increase compilation time after an arbitrary code change. Nevertheless, there is ongoing work in rules_rust to support rustc’s incremental features: here, here and here.

Conclusion

Now we have a fully hermetic Bazel build for ripgrep2. You can find the complete implementation here. It keeps the Cargo files as the source of truth for external dependencies and project structure, which helps with managing those dependencies and with IDE setup, since IDEs can use the Cargo files to configure themselves. There is still work to be done to automate path dependencies, and there are some projects out there aiming for that. Maybe we’ll look at them more closely next time. Stay tuned!


  1. You can find more details about this command here
  2. Well, technically not completely hermetic; Bazel still picks up CC toolchain from the system. There are some resources regarding hermetic CC toolchains in Bazel here, here and here.

July 27, 2023 12:00 AM

July 22, 2023

Jasper Van der Jeugt

Lazy Layout

Prelude

This blogpost is written in reproducible Literate Haskell, so we need some imports first.

Show me the exact imports…
{-# LANGUAGE DeriveFoldable    #-}
{-# LANGUAGE DeriveFunctor     #-}
{-# LANGUAGE DeriveTraversable #-}
module Main where
import qualified Codec.Picture          as JP
import qualified Codec.Picture.Types    as JP
import           Control.Monad.ST       (runST)
import           Data.Bool              (bool)
import           Data.Foldable          (for_)
import           Data.List              (isSuffixOf, partition)
import           Data.List.NonEmpty     (NonEmpty (..))
import           System.Environment     (getArgs)
import           System.Random          (RandomGen, newStdGen)
import           System.Random.Stateful (randomM, runStateGen)
import           Text.Read              (readMaybe)

Introduction

Haskell is not my only interest — I have also been quite into photography for the past decade. Recently, I was considering moving some of the stuff I have on various social networks to a self-hosted solution.

Tumblr in particular has a fairly nice way to do photo sets, where these can be organised in rows and columns. I wanted to see if I could mimic this in a recursive way, where rows and columns can be subdivided further.

One important constraint is that we want to present each picture as the photographer envisioned it: concretely, we can scale it up or down (preserving the aspect ratio), but we can’t crop out parts.

Order is also important in photo essays, so we want the author to specify the photo collage in a declarative way by indicating if horizontal (H) or vertical (V) subdivision should be used, creating a tree. For example:

H img1.jpg
  (V img2.jpg
     (H img3.jpg
        img4.jpg))

The program should then determine the exact size and position of each image, so that we get a fully filled rectangle without any borders or filler:

We will use a technique called circular programming that builds on Haskell’s laziness to achieve this in an elegant way. These days, it is maybe more commonly referred to as the repmin problem. This was first described by Richard S. Bird in “Using circular programs to eliminate multiple traversals of data” in 1984, which predates Haskell!

Give me a refresher on repmin please…

Interlude: repmin

Given a simple tree type:

data Tree a
  = Leaf a
  | Branch (Tree a) (Tree a)

We would like to write a function repmin which replaces each value in each Leaf with the global minimum in the tree. This is easily done by first finding the global minimum, and then replacing it everywhere:

repmin_2pass :: Ord a => Tree a -> Tree a
repmin_2pass t =
  let globalmin = findmin t in rep globalmin t
 where
  findmin (Leaf x)     = x
  findmin (Branch l r) = min (findmin l) (findmin r)

  rep x (Leaf _)     = Leaf x
  rep x (Branch l r) = Branch (rep x l) (rep x r)

However, this requires two passes over the tree. We can do better by using Haskell’s laziness:

repmin_1pass :: Ord a => Tree a -> Tree a
repmin_1pass t = t'
 where
  (t', globalmin) = repmin t

  repmin (Leaf   x)   = (Leaf globalmin, x)
  repmin (Branch l r) =
    (Branch l' r', min lmin rmin)
   where
    (l', lmin) = repmin l
    (r', rmin) = repmin r

There is an apparent circular dependency here, where repmin uses globalmin, but also computes it. This is possible because we never need to evaluate globalmin – it can be stored as a thunk. For more details, please see the very accessible original paper (https://doi.org/10.1007/BF00264249).
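As a quick sanity check, here is repmin_1pass run on a small example tree. This is a standalone sketch: it repeats the definitions above, with Eq and Show derived purely so we can inspect the result.

```haskell
-- Standalone sketch of repmin_1pass, with Eq/Show derived for testing.
data Tree a
  = Leaf a
  | Branch (Tree a) (Tree a)
  deriving (Eq, Show)

repmin_1pass :: Ord a => Tree a -> Tree a
repmin_1pass t = t'
 where
  (t', globalmin) = repmin t

  -- Each leaf is rebuilt with the (not yet computed) global minimum,
  -- which is stored as a thunk until the whole traversal finishes.
  repmin (Leaf   x)   = (Leaf globalmin, x)
  repmin (Branch l r) =
    (Branch l' r', min lmin rmin)
   where
    (l', lmin) = repmin l
    (r', rmin) = repmin r

main :: IO ()
main = print $ repmin_1pass $
  Branch (Leaf 3) (Branch (Leaf 1) (Leaf 2))
  -- every leaf is replaced by the global minimum, 1
```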

Starting out with some types

We start out by giving an elegant algebraic definition for a collage:

data Collage a
  = Singleton  a
  | Horizontal (Collage a) (Collage a)
  | Vertical   (Collage a) (Collage a)
  deriving (Foldable, Functor, Show, Traversable)

We use a type parameterised over its contents, which allows us to work with collages of file paths as well as actual images (among other things). The deriving clause instructs the compiler to generate some boilerplate instances for us, which allows us to concisely read all images using traverse:

readCollage
  :: Collage FilePath
  -> IO (Collage (JP.Image JP.PixelRGB8))
readCollage = traverse $ \path ->
  JP.readImage path >>=
  either fail (pure . JP.convertRGB8)

We use the JuicyPixels library to read and write images. The image type in this library can be a bit verbose since it is parameterised around the colour space.

During the layout pass, we don’t really care about this complexity. We only need the relative sizes of the images and not their content. We introduce a typeclass to do just that:

data Size = Sz
  { szWidth  :: Rational
  , szHeight :: Rational
  } deriving (Show)

class Sized a where
  -- | Retrieve the width and height of an image.
  -- Both numbers must be strictly positive.
  sizeOf :: a -> Size

We use the Rational type for width and height. We are only subdividing the 2D space, so we do not need irrational numbers, and having infinite precision is convenient.

The instance for the JuicyPixels image type is simple:

instance Sized (JP.Image p) where
  sizeOf img = Sz
    { szWidth  = fromIntegral $ JP.imageWidth  img
    , szHeight = fromIntegral $ JP.imageHeight img
    }

Laying out two images

If we look at the finished image, it may seem like a hard problem to find a configuration that fits all the images with a correct aspect ratio.

But we can use induction to arrive at a fairly straightforward solution. Given two images, it is always possible to put them beside or above each other by scaling them up or down to match them in height or width respectively. This creates a bigger image. We can then repeat this process until just one image is left.

However, this is quite a naive approach since we end up making way too many copies, and the repeated resizing could also result in a loss of resolution. We would like to compute the entire layout first, and then render everything in one go. Still, we can start by formalising what happens for two images and then work our way up.

We can represent the layout of an individual image by its position and size. We use simple (x, y) coordinates for the position and a scaling factor (relative to the original size of the image) for its size.

data Transform = Tr
  { trX     :: Rational
  , trY     :: Rational
  , trScale :: Rational
  } deriving (Show)

Armed with the Size and Transform types, we have enough to tackle the “mathy” bits.

Let’s look at the horizontal case first and write a function that computes a transform for both left and right images, as well as the size of the result.

horizontal :: Size -> Size -> (Transform, Transform, Size)
horizontal (Sz lw lh) (Sz rw rh) =

We want to place image l beside image r, producing a nicely filled rectangle. Intuitively, we should be matching the height of both images.

There are different ways to do this — we could shrink the taller image, enlarge the shorter image, or something in between. We make a choice to always shrink the taller image, as this doesn’t compromise the sharpness of the result.

  let height = min lh rh
      lscale = height / lh
      rscale = height / rh
      width  = lscale * lw + rscale * rw in

With the scale for both left and right images, we can compute the left and right transforms. The left image is simply placed at (0, 0) and we need to offset the right image depending on the (scaled) size of the left image.

  ( Tr 0             0 lscale
  , Tr (lscale * lw) 0 rscale
  , Sz width height
  )

Composing images vertically is similar, just matching the widths rather than the heights of the two images and moving the bottom image below the top one:

vertical :: Size -> Size -> (Transform, Transform, Size)
vertical (Sz tw th) (Sz bw bh) =
  let width  = min tw bw
      tscale = width / tw
      bscale = width / bw
      height = tscale * th + bscale * bh in
  ( Tr 0 0             tscale
  , Tr 0 (tscale * th) bscale
  , Sz width height
  )
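To make the arithmetic concrete, here is horizontal applied to two made-up sizes (a 400×200 image beside a 300×300 one), as a standalone sketch repeating the relevant definitions with Eq and Show derived:

```haskell
-- Standalone sketch of the horizontal helper from the post.
data Size = Sz
  { szWidth  :: Rational
  , szHeight :: Rational
  } deriving (Eq, Show)

data Transform = Tr
  { trX     :: Rational
  , trY     :: Rational
  , trScale :: Rational
  } deriving (Eq, Show)

horizontal :: Size -> Size -> (Transform, Transform, Size)
horizontal (Sz lw lh) (Sz rw rh) =
  let height = min lh rh        -- shrink the taller image
      lscale = height / lh
      rscale = height / rh
      width  = lscale * lw + rscale * rw
  in (Tr 0 0 lscale, Tr (lscale * lw) 0 rscale, Sz width height)

main :: IO ()
main = do
  let (lt, rt, sz) = horizontal (Sz 400 200) (Sz 300 300)
  print lt  -- left image untouched: scale 1, placed at (0, 0)
  print rt  -- right image shrunk to 2/3 and offset right by 400
  print sz  -- resulting canvas: 600 wide, 200 tall
```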

Composing transformations

Now that we’ve solved the problem of combining two images and placing them, we can apply this to our tree of images. To this end, we need to compose multiple transformations.

Whenever we think about composing things in Haskell, it’s good to ask ourselves if what we’re trying to compose is a Monoid. A Monoid needs an identity element (mempty) and a Semigroup instance, the latter of which contains just an associative binary operator (<>).

The identity transform is just offsetting by 0 and scaling by 1:

instance Monoid Transform where
  mempty = Tr 0 0 1

Composing two transformations using <> requires a bit more thinking. In this case, a <> b means applying transformation a after transformation b, so we will need to apply the scale of b to all parts of a:

instance Semigroup Transform where
  Tr ax ay as <> Tr bx by bs =
    Tr (ax * bs + bx) (ay * bs + by) (as * bs)

Readers who are familiar with linear algebra may recognise the connection to a sort of restricted affine 2D transformation matrix.

Proving that mempty is an identity is simple, so we will only show one side, namely a <> mempty == a.

Proof of Monoid right identity…
Tr ax ay as <> mempty

-- Definition of mempty
= Tr ax ay as <> Tr 0 0 1

-- Definition of <>
= Tr (ax * 1 + 0) (ay * 1 + 0) (as * 1)

-- Cancellative property of 0 over +
-- Identity of 1 over *
= Tr ax ay as

Next, we want to prove that the <> operator is associative, meaning a <> (b <> c) == (a <> b) <> c.

Proof of associativity…
Tr ax ay as <> (Tr bx by bs <> Tr cx cy cs)

-- Definition of <>
= Tr (ax * (bs * cs) + (bx * cs + cx))
     (ay * (bs * cs) + (by * cs + cy))
     (as * (bs * cs))

-- Associativity of * and +
= Tr (ax * bs * cs + bx * cs + cx)
     (ay * bs * cs + by * cs + cy)
     ((as * bs) * cs)

-- Distributivity of * over +
= Tr ((ax * bs + bx) * cs + cx)
     ((ay * bs + by) * cs + cy)
     ((as * bs) * cs)

-- Definition of <>
= (Tr ax ay as <> Tr bx by bs) <> Tr cx cy cs
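We can also check the composition law operationally. The applyTr helper below is my own addition (it is not part of the post): it applies a Transform to a point, scaling first and then offsetting. Read pointwise, a <> b scales and offsets by a first and then by b, which matches how layout later composes a child transform with its parent’s.

```haskell
-- Standalone sketch of Transform and its Monoid instance from the post.
data Transform = Tr
  { trX     :: Rational
  , trY     :: Rational
  , trScale :: Rational
  } deriving (Eq, Show)

instance Semigroup Transform where
  Tr ax ay as <> Tr bx by bs =
    Tr (ax * bs + bx) (ay * bs + by) (as * bs)

instance Monoid Transform where
  mempty = Tr 0 0 1

-- Hypothetical helper (not in the post): apply a Transform to a point,
-- scaling first and then offsetting.
applyTr :: Transform -> (Rational, Rational) -> (Rational, Rational)
applyTr (Tr x y s) (px, py) = (px * s + x, py * s + y)

main :: IO ()
main = do
  let a = Tr 10 20 2
      b = Tr 1 2 3
      p = (5, 7)
  -- Pointwise, a <> b applies a first, then b:
  print (applyTr (a <> b) p == applyTr b (applyTr a p))
  -- And mempty really does nothing:
  print (applyTr mempty p == p)
```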

Now that we have a valid Monoid instance, we can use the higher-level <> and mempty concepts in our core layout algorithm, rather than worrying over details like (x, y) coordinates and scaling factors.

The lazy layout

Our main layoutCollage function takes the user-specified tree as input, and annotates each element with a Transform. In addition to that, we also produce the Size of the final image so we can allocate space for it.

layoutCollage
  :: Sized img
  => Collage img
  -> (Collage (img, Transform), Size)

All layoutCollage does is call layout — our circular program — with the identity transformation:

layoutCollage = layout mempty

layout takes the size and position of the current element as an argument, and determines the sizes and positions of a tree recursively.

There are some similarities with the algorithms present in browser engines, where a parent element will first lay out its children, and then use their properties to determine its own width.

However, we will use Haskell’s laziness to do this in a single top-down pass. We provide a declarative algorithm and we leave the decision about what to calculate when — more concretely, propagating the requested sizes of the children back up the tree before constructing the transformations — to the compiler!

layout
  :: Sized img
  => Transform
  -> Collage img
  -> (Collage (img, Transform), Size)

Placing a single image is easy, since we are receiving the transformation directly as an argument. We return the requested size — which is just the original size of the image. This is an important detail in making the laziness work here: if we tried to return the final size (including the passed in transformation) rather than the requested size, the computation would diverge (i.e. recurse infinitely).

layout trans (Singleton img) =
  (Singleton (img, trans), sizeOf img)

In the recursive case for horizontal composition, we call the horizontal helper we defined earlier with the left and right image sizes as arguments. This gives us both transformations, which we can then pass as arguments to layout again – returning the left and right image sizes that we pass to the horizontal helper, forming our apparent circle.

layout trans (Horizontal l r) =
  (Horizontal l' r', size)
 where
  (l', lsize)            = layout (ltrans <> trans) l
  (r', rsize)            = layout (rtrans <> trans) r
  (ltrans, rtrans, size) = horizontal lsize rsize

The same happens for the vertical case:

layout trans (Vertical t b) =
  (Vertical t' b', size)
 where
  (t', tsize)            = layout (ttrans <> trans) t
  (b', bsize)            = layout (btrans <> trans) b
  (ttrans, btrans, size) = vertical tsize bsize

It’s worth thinking about why this works: the intuitive explanation is that we can “delay” the execution of the transformations until the very end of the computation, and then fill them in everywhere. This works since no other parts of the algorithm depend on the transformation, only on the requested sizes.
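The same trick in miniature (my example, not from the post): annotating every element of a list with its fraction of the total works in a single pass for exactly the same reason — nothing forces total while the list is being rebuilt, so it can remain a thunk until the end.

```haskell
-- One-pass "fraction of total", analogous to the one-pass layout:
-- the result list refers to a total that is only finished at the end.
fractions :: [Rational] -> [Rational]
fractions xs = ys
 where
  (ys, total) = go xs
  go []      = ([], 0)
  go (x : r) = let (r', t) = go r in (x / total : r', x + t)

main :: IO ()
main = print (fractions [1, 1, 2])
```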

Conclusion

We’ve written a circular program! Although I was aware of repmin for a long time, it’s not a technique I’ve applied often. To me, it is quite interesting because, compared to repmin:

  • it is easier to explain to a novice why this is useful;
  • it is perhaps easier to understand due to the visual aspect; and
  • it is an example outside of the realm of parsers and compilers.

The structure is also somewhat different; rather than having a circular step at the top-level function invocation, we have it at every step of the recursion.

Thanks to Francesco Mazzoli and Titouan Vervack for reading a draft of this blogpost and suggesting improvements. And thanks to you for reading!

What follows below are a number of relatively small functions that take care of various tasks, included so this can function as a standalone program:

Appendices

Rendering the result

Once we’ve determined the layout, we still need to apply it and draw all the images using the computed transformations. We use simple nearest-neighbour scaling since scaling is not the focus of this program; in a real application you could consider Lánczos interpolation.

render
  :: Foldable f
  => Size
  -> f (JP.Image JP.PixelRGB8, Transform)
  -> JP.Image JP.PixelRGB8
render (Sz width height) images = runST $ do
  canvas <- JP.createMutableImage (round width) (round height) black
  for_ images $ transform canvas
  JP.unsafeFreezeImage canvas
 where
  black = JP.PixelRGB8 0 0 0

  transform canvas (img, Tr dstX dstY dstS) =
    for_ [round dstX .. round (dstX + dstW) - 1] $ \outX ->
    for_ [round dstY .. round (dstY + dstH) - 1] $ \outY ->
      let inX = min (JP.imageWidth img - 1) $ round $
                fromIntegral (outX - round dstX) / dstS
          inY = min (JP.imageHeight img - 1) $ round $
                fromIntegral (outY - round dstY) / dstS in
      JP.writePixel canvas outX outY $ JP.pixelAt img inX inY
   where
    dstW = fromIntegral (JP.imageWidth img)  * dstS
    dstH = fromIntegral (JP.imageHeight img) * dstS

Parsing a collage description

We use a simple parser to allow the user to specify collages as a string, for example on the command line. This is a natural fit for Polish notation, since using parentheses in command-line arguments is very awkward.

As an example, we want to parse the following arguments:

H img1.jpg V img2.jpg H img3.jpg img4.jpg

Into this tree:

(Horizontal "img1.jpg"
  (Vertical "img2.jpg"
    (Horizontal "img3.jpg" "img4.jpg")))

We don’t even need a parser library; we can just treat the arguments as a stack:

parseCollage :: [String] -> Maybe (Collage FilePath)
parseCollage args = do
  (tree, []) <- parseTree args
  pure tree
 where
  parseTree []             = Nothing
  parseTree ("H" : stack0) = do
    (x, stack1) <- parseTree stack0
    (y, stack2) <- parseTree stack1
    pure (Horizontal x y, stack2)
  parseTree ("V" : stack0) = do
    (x, stack1) <- parseTree stack0
    (y, stack2) <- parseTree stack1
    pure (Vertical x y, stack2)
  parseTree (x   : stack0) = Just (Singleton x, stack0)
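A quick check that the parser produces the tree from the example above. This standalone sketch repeats the definitions with Eq and Show derived for Collage so we can compare results:

```haskell
-- Standalone sketch of parseCollage from the post, with Eq/Show derived.
data Collage a
  = Singleton  a
  | Horizontal (Collage a) (Collage a)
  | Vertical   (Collage a) (Collage a)
  deriving (Eq, Show)

parseCollage :: [String] -> Maybe (Collage FilePath)
parseCollage args = do
  (tree, []) <- parseTree args  -- the whole stack must be consumed
  pure tree
 where
  parseTree []             = Nothing
  parseTree ("H" : stack0) = do
    (x, stack1) <- parseTree stack0
    (y, stack2) <- parseTree stack1
    pure (Horizontal x y, stack2)
  parseTree ("V" : stack0) = do
    (x, stack1) <- parseTree stack0
    (y, stack2) <- parseTree stack1
    pure (Vertical x y, stack2)
  parseTree (x : stack0)   = Just (Singleton x, stack0)

main :: IO ()
main = print $ parseCollage
  (words "H img1.jpg V img2.jpg H img3.jpg img4.jpg")
```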

Generating random collages

In order to test this program, I also added some functionality to generate random collages.

randomCollage :: RandomGen g => NonEmpty a -> g -> (Collage a, g)
randomCollage ne gen = runStateGen gen $ \g -> go g ne
 where

The utility rc picks a random constructor.

  rc g = bool Horizontal Vertical <$> randomM g

In our worker function, we keep one item on the side (x), and randomly decide if other items will go in the left or right subtree:

  go g (x :| xs) = do
    (lts, rts) <- partition snd <$>
      traverse (\y -> (,) y <$> randomM g) xs

Then, we look at the random partitioning we just created. If they’re both empty, the only thing we can do is create a singleton collage:

    case (map fst lts, map fst rts) of
      ([],       [])       -> pure $ Singleton x

If either of them is empty, we put x in the other partition to ensure we don’t create invalid empty trees:

      ((l : ls), [])       -> rc g <*> go g (l :| ls) <*> go g (x :| [])
      ([],       (r : rs)) -> rc g <*> go g (x :| []) <*> go g (r :| rs)

Otherwise, we decide at random which partition x goes into:

      ((l : ls), (r : rs)) -> do
        xLeft <- randomM g
        if xLeft
          then rc g <*> go g (x :| l : ls) <*> go g (r :| rs)
          else rc g <*> go g (l :| ls)     <*> go g (x :| r : rs)

Putting together the CLI

We support two modes of operation for our little CLI:

  • A user-specified collage, using the parser we wrote before.
  • A random collage generated from a number of images.

In both cases, we also take an output file as the first argument, so we know where to write the image. We also take an optional -fit flag so we can resize the final image down to a requested size.

data Command = Command
  { cmdOut     :: FilePath
  , cmdFit     :: Maybe Int
  , cmdCollage :: CommandCollage
  }

data CommandCollage
  = User   (Collage FilePath)
  | Random (NonEmpty FilePath)
  deriving (Show)

There is some setup to parse the output and a -fit flag. The important bit happens in parseCommandCollage further down.

parseCommand :: [String] -> Maybe Command
parseCommand cmd = case cmd of
  [] -> Nothing
  ("-fit" : num : args) | Just n <- readMaybe num -> do
    cmd' <- parseCommand args
    pure cmd' {cmdFit = Just n}
  (o : args) -> Command o Nothing <$> parseCommandCollage args

We’ll use R for a random collage, and H/V will be parsed by parseCollage.

 where
  parseCommandCollage ("R" : x : xs) = Just $ Random (x :| xs)
  parseCommandCollage spec           = User <$> parseCollage spec

Time to put everything together in the main function. First, we do some parsing:

main :: IO ()
main = do
  args <- getArgs
  command <- maybe (fail "invalid command") pure $
    parseCommand args
  pathsCollage <- case cmdCollage command of
    User explicit -> pure explicit
    Random paths -> do
      gen <- newStdGen
      let (random, _) = randomCollage paths gen
      pure random

Followed by actually reading in all the images:

  imageCollage <- readCollage pathsCollage

This gives us the Collage (JP.Image JP.PixelRGB8). We can pass that to our layout function and write it to the output, after optionally applying our fit:

  let (result, box) = case cmdFit command of
        Nothing -> layoutCollage imageCollage
        Just f  -> fit f $ layoutCollage imageCollage
  write (cmdOut command) $ JP.ImageRGB8 $ render box result
 where
  write output
    | ".jpg" `isSuffixOf` output = JP.saveJpgImage 80 output
    | otherwise                  = JP.savePngImage output

Resizing the result

Most of the time I don’t want to host full-resolution pictures for web viewing. This is an addition I added later on to resize an image down to a requested “long edge” (i.e. a requested maximum width or height, whichever is bigger).

Interestingly, I think this can also be done by adding an extra parameter to layout, and using circular programming once again to link the initial transformation to the requested size. However, the core algorithm is harder to understand that way, so I left it as a separate utility:

fit
  :: Int
  -> (Collage (img, Transform), Size)
  -> (Collage (img, Transform), Size)
fit longEdge (collage, Sz w h)
  | long <= fromIntegral longEdge = (collage, Sz w h)
  | otherwise                     =
      (fmap (<> tr) <$> collage, Sz (w * scale) (h * scale))
 where
  long  = max w h
  scale = fromIntegral longEdge / long
  tr    = Tr 0 0 scale

by Jasper Van der Jeugt at July 22, 2023 12:00 AM

July 20, 2023

Tweag I/O

How to Prevent GHC from Inferring Types with Undesirable Constraints

One classic appeal of Haskell is that its type system allows experts to define very precise constraints within the program’s problem domain. In my working experience, such powerful constraints are a double-edged sword for projects with long lifespans. These sophisticated types do, as promised, statically prevent many kinds of costly mistakes and are indeed expressed via definitions that resemble the particular problem domain better than many other general purpose languages would allow. On the other hand, the requisite definitions tend to be dense and leak noisy details of how exactly the domain-specific constraints were encoded in the GHC Haskell type system. This is sadly often apparent in the poor quality of error messages that arise when these type-level programs catch a user’s mistake. So while use of these precise problem-specific types provides very desirable static checks, they also often frustrate or intimidate less expert contributors — including the original authors when they return to the project after a few months!

In this post, I present a technique that can help mitigate that downside. I call the technique Ill-Formedness Indicators. For a broad class of domain-specific constraints/languages, this is the first technique I’ve learned of that enables custom compile-time error messages in the particularly insidious scenario where the user’s mistake incurs polymorphism that the library author did not intend. I’ll explain with a small and concrete example and then list some generalizations that would also benefit from Ill-Formedness Indicators in the exact same way.

The mechanism underlying Ill-Formedness Indicators is introduced by Csongor Kiss’s blog post Detecting the undetectable: custom type errors for stuck type families. My contribution is to emphasize that it applies to type classes in addition to type families, and to spell out how and when it can be applied.

The complete and self-contained code for this blog post can be found here.

Motivation

My working example for this technique is the definition of a smart constructor for the conventional heterogeneous vector data type. I include the signature of the vrev function here only for completeness; its definition is uninteresting.

data Vec :: [Type] -> Type where
    VNil  ::                Vec '[]
    VCons :: x -> Vec xs -> Vec (x : xs)

-- | Reverse the vector
vrev :: VRev xs xs' => Vec xs -> Vec xs'

My example smart constructor is named litVec. It’s smart because it can be applied to any number of arguments. In Haskell, such variadic functions can only be defined via type-level programming, which is why litVec is a useful example for this blog post.

Here are some examples of the intended use of litVec and its auxiliary function variadic, which merely delineates the end of the variable-length list of arguments. Unfortunately, neither has a helpful type signature, as you will see in the small definition immediately below.

> :type variadic (litVec 'c' True 3)
variadic (litVec 'c' True 3)
  :: Num t => Vec '[Char, Bool, t]
> variadic (litVec 'c' True 3)
VCons 'c' (VCons True (VCons 3 VNil))
> variadic $ litVec 'c' True
VCons 'c' (VCons True VNil)
> variadic (litVec True)
VCons True VNil
> variadic litVec
VNil

This first definition of litVec does not use an Ill-Formedness Indicator, so that its misbehavior can motivate that enrichment.

{-# LANGUAGE DataKinds, FlexibleContexts, FlexibleInstances,
             MultiParamTypeClasses, TypeFamilies, TypeOperators,
             UndecidableInstances #-}

module LitVec (litVec, variadic) where

import Vec (Vec (VCons, VNil), VRev (vrev))

-- | NOT EXPORTED
class MkLitVec xs a where mkLitVec :: Vec xs -> a

instance MkLitVec (x : xs) a => MkLitVec xs (x -> a) where
    mkLitVec acc = \x -> mkLitVec (VCons x acc)

-- | NOT EXPORTED
newtype Variadic a = MkVariadic { {- | See 'litVec' -} variadic :: a }

instance (VRev xs xs', a ~ Vec xs') => MkLitVec xs (Variadic a) where
    mkLitVec acc = MkVariadic (vrev acc)

-- | A variadic vector constructor
--
-- Use it like this: @'variadic' ('litVec' a b c)@
litVec :: MkLitVec '[] a => a
litVec = mkLitVec VNil

In words: the first instance handles each argument to litVec, and the second instance handles when the user finally applies variadic, which indicates there will be no further arguments. The first instance merely pushes the arguments on to a stack, and the second instance reverses that stack to yield the final vector in the same left-to-right order that the user wrote the arguments in. Only litVec and variadic need be exported; they constitute the entire interface supporting the intended use.

The key mistake the user can make would be to use litVec without the application of variadic. The variadic function is how the user delimits the end of the arguments to litVec. Together that pair of identifiers effectively provides literal syntax for the Vec type, and so is analogous to the pair of [ and ] for lists. (It’s a very small change to instead enable the syntax beginVec x y z endVec, but as far as I know that requires -XIncoherentInstances, which would necessitate a tangential explanation of its own.)

If the user is confused or simply forgets to call variadic, ideally GHC would tell them as much, just as it does when they forget the ] for a list literal. Unfortunately, that is not the case with the above conventional definition of litVec.

> litVec 'c' True 3

<interactive>:23:1: error:
    • Could not deduce (MkLitVec '[t0, Bool, Char] t1)
      from the context: (MkLitVec '[t, Bool, Char] t1, Num t)
        bound by the inferred type for ‘it’:
                   forall t t1. (MkLitVec '[t, Bool, Char] t1, Num t) => t1
        at <interactive>:23:1-16
      The type variable ‘t0’ is ambiguous
    • In the ambiguity check for the inferred type for ‘it’
      To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
      When checking the inferred type
        it :: forall t1 t2. (MkLitVec '[t1, Bool, Char] t2, Num t1) => t2

Nothing about that error message indicates that the user forgot to apply variadic. Worse still, the message actually recommends the AllowAmbiguousTypes language extension, which is notorious for tricking novice users into deferring the error message to the call site, much farther away from their actual mistake! An expert would eventually intuit the problem, but, without revisiting the definition of MkLitVec, even they would fail to determine that the intended use requires variadic.

Thus this definition of litVec is a disappointment; library users should not have to inspect the implementation details in order to understand the error message induced by a simple and common mistake. This is the problem that Ill-Formedness Indicators let the library author solve. The only other way I know for a library author to address this kind of mistake is to warn about it loudly in the library documentation. In my opinion, that option fails to meet the Haskell community’s general expectations around type safety.

Demonstration

Once I add Ill-Formedness Indicators to the definition of litVec, the user instead sees the following custom compile-time error message, which explains the exact mistake we’re targeting.

> litVecIFI 'c' True 3

<interactive>:25:1: error:
    • Likely accidental use of `litVecIFI' outside of `variadic'!
    • When checking the inferred type
        it :: forall t1 t2.
              (MkLitVecIFI (TypeError ...) '[t1, Bool, Char] t2, Num t1) =>
              t2

Ill-Formedness Indicators can be summarized in a single sentence.

The library author must ensure that any constraint they never want the user to see includes a subexpression applying the special TypeError family (introduced in GHC 8.0) to the desired error message.

That subexpression is the eponymous indicator. If one is present in a constraint that the typechecker couldn’t simplify where it originally arose, then that TypeError will trigger when GHC is checking the inferred type. This ensures the user sees the bespoke error message instead of the standard Could not deduce error that contains the inscrutable implementation details of the internal constraints, which the library author would rather users never have to consider.
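Before returning to litVec, the same pattern can be distilled into a minimal, self-contained toy sketch. The names here (Tagged, Delay, the message text) are illustrative only, not part of the litVec code:

```haskell
{-# LANGUAGE AllowAmbiguousTypes, DataKinds, FlexibleInstances,
             MultiParamTypeClasses, TypeApplications, TypeFamilies,
             UndecidableInstances #-}

import Data.Void (Void)
import GHC.TypeLits (ErrorMessage (Text), TypeError)

-- One layer of indirection so the indicator does not fire while
-- typechecking the definitions that merely mention it.
type family Delay (err :: ErrorMessage) :: Void where
    Delay err = TypeError err

-- A toy internal class carrying an indicator parameter of kind Void.
class Tagged (ifi :: Void) a where tagged :: a

instance Tagged ifi Int where tagged = 42

-- Well-formed use: the Int instance fully solves the constraint,
-- so the embedded TypeError is never raised.
ok :: Int
ok = tagged @(Delay (Text "misuse of tagged!"))

-- By contrast, a use at a type with no instance, such as
--   bad = tagged @(Delay (Text "misuse of tagged!")) :: Bool
-- leaves the constraint stuck, and GHC reports "misuse of tagged!"
-- rather than a Could-not-deduce error about Tagged.

main :: IO ()
main = print ok
```

The stuck case cannot appear in a compiling module, which is exactly the point: the indicator turns the misuse into the custom compile-time error.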

The enriched definition of litVecIFI is very similar to the original litVec definition above: I only need to add a Void parameter to the class, propagate it, and initialize it with a specific TypeError in the signature of litVecIFI. My specific recommendation for Ill-Formedness Indicators is — whenever possible — to add a new parameter of kind Void to the type classes used in internal constraints and confine the indicator TypeError arguments to that new parameter. Even in more sophisticated use cases where some pre-existing parameters might already be able to host the indicators, I favor adding a new argument of kind Void in order to more cleanly separate the encoding noise of the Ill-Formedness Indicators from the classes’ actual logic. The Void kind (ideally) has no “matchable” inhabitants, which ensures it cannot influence instance selection for its class.

-- | NOT EXPORTED
class MkLitVecIFI (ifi :: Data.Void.Void) xs a where
    mkLitVecIFI :: Proxy ifi -> Vec xs -> a

instance MkLitVecIFI ifi (x : xs) a => MkLitVecIFI ifi xs (x -> a) where
    mkLitVecIFI ifi acc = \x -> mkLitVecIFI ifi (VCons x acc)

-- | NOT EXPORTED
newtype Variadic a = MkVariadic { {- | See 'litVecIFI' -} variadic :: a }

instance (a ~ Vec xs', VRev xs xs') => MkLitVecIFI ifi xs (Variadic a) where
    mkLitVecIFI _ifi acc = MkVariadic (vrev acc)

-- | A variadic vector constructor
--
-- Use it like this: @'variadic' ('litVecIFI' a b c)@
litVecIFI :: MkLitVecIFI (DelayTypeError LitVecErrMsg) '[] a => a
litVecIFI = mkLitVecIFI (Proxy :: Proxy (DelayTypeError LitVecErrMsg)) VNil

type LitVecErrMsg =
  GHC.TypeLits.Text
    "Likely accidental use of `litVecIFI' outside of `variadic'!"

-- | If I directly used 'TypeError' above instead of 'DelayTypeError', GHC
-- would raise the error while typechecking the 'litVecIFI' definition. This
-- alias's single layer of indirection is enough to prevent that.
-- <https://gitlab.haskell.org/ghc/ghc/-/issues/20241> should happily supplant
-- this in future GHCs.
--
-- Note: this can be defined elsewhere and reused amongst many uses of
-- Ill-Formedness Indicators.
type family DelayTypeError (err :: ErrorMessage) :: Void where
    DelayTypeError err = GHC.TypeLits.TypeError err

That’s all there is to it, just the extra class parameter and the concrete error messages. The user will now see the clear and helpful error message when they forget to apply variadic, as shown at the top of this section.

Ill-Formedness Indicators do unavoidably add some noise to the definitions, but it is minimal: merely the propagation of the Void parameter. Only a new GHC language extension, pragma, or similar could further reduce the boilerplate.

Why It Works

Csongor Kiss’s blog post explains the actual mechanics in detail. If only for the sake of self-containedness, I complement that here by providing a higher-level explanation.

In general, Ill-Formedness Indicators provide a way to opt out of Haskell’s open-world assumption for some constraints, such as the constraint in the signature of litVecIFI. When type-checking, if a value definition requires some constraints that are not fully solved by whatever instance declarations are already in scope, GHC infers the definition has type cx => ... where cx is the residual constraints. That inference is justified by the open-world assumption, which acknowledges that a call site might concretize some of the type variables so that those same instances now suffice, or it might merely have more instances in scope. And if the constraint is still stuck at the call site, then the cycle repeats: GHC will also infer a type context for that enclosing definition, which has its own call sites, and so on. This cycle only stops under two circumstances. Either it reaches a point in the program where constrained types are not allowed, such as an executable’s main entrypoint, the Read-Eval-Print-Loop, or a signature written by the user, or else GHC identifies one of the residual constraints as insoluble (e.g., Int ~ Bool). In other words, the cycle stops when (the relevant part of) the world has become closed instead of open.

Indeed, the motivating example from the previous section emitted the Could not deduce error message only because I entered the expression at the REPL prompt. If I instead were to bind that litVec-without-variadic to a definition, then there would be no error at all without Ill-Formedness Indicators.

> let foo z = litVec 'c' True z
> :info foo
foo :: MkLitVec '[t1, Bool, Char] t2 => t1 -> t2

Such inferred types might prove even more confusing than the bad error message! Eventually an error message would arise, but it would not be local to the actual mistake (just like -XAllowAmbiguousTypes above). Or the user might inspect the inferred type and be confused about its meaning — recall that the library author never intended for them to consider it. The inferred context is not that noisy in this small litVec example, but it would quickly become overwhelmingly dense in more real-world applications involving type-level programming.

With Ill-Formedness Indicators the user instead sees the same bespoke error in either example — they see it whenever they fail to apply variadic to the litVec application, full stop. This is the sort of predictable simple behavior that lets a library author guide their users with in-band advice. In particular, the user sees the error despite the fact that a later call site could apply variadic, thereby eliminating the error altogether — i.e., despite GHC’s open-world assumption.

> let foo z = litVecIFI 'c' True z

<interactive>:33:5: error:
    • Likely accidental use of `litVecIFI' outside of `variadic'!
    • When checking the inferred type
        foo :: forall t1 t2.
               MkLitVecIFI (TypeError ...) '[t1, Bool, Char] t2 =>
               t1 -> t2

In this way, an Ill-Formedness Indicator opts out of GHC’s open-world assumption, but only for the MkLitVecIFI constraint in the declared signature of litVecIFI.

The low-level mechanics depend on GHC’s special treatment of TypeError applications. In particular, TypeError exhibits negation-as-failure semantics, which is a hallmark of the closed-world assumption. Moreover, beyond a merely insoluble constraint, a TypeError application anywhere within a constraint in an inferred context causes GHC to raise the compile-time error. Ill-Formedness Indicators wouldn’t help nearly as much if TypeError had to be at the head of the stuck constraint (which is what I had been incorrectly assuming would be necessary before reading Kiss’s blog post — and is what the proposed Unsatisfiable family will do), since GHC’s open-world assumption makes it difficult to robustly emit a TypeError-headed constraint only when the constraint is stuck.

Alternatives

The only alternative I am aware of relies on overlapping instances. For my original example, this would require merely one additional instance, using the OVERLAPS pragma.

class MkLitVecOI xs a where mkLitVecOI :: Vec xs -> a

instance {-# OVERLAPS #-} MkLitVecOI (x : xs) a => MkLitVecOI xs (x -> a) where ...

instance {-# OVERLAPS #-} (a ~ Vec xs', VRev xs xs') => MkLitVecOI xs (Variadic a) where ...

-- I could have instead used `OVERLAPPABLE` here, but `OVERLAPS` on the other
-- instances better conveys the intention. In cases where the type class is
-- exported, some well-intentioned but confused expert users may interpret an
-- `OVERLAPPABLE` pragma as an invitation to declare their own instances.
instance TypeError LitVecErrMsg => MkLitVecOI xs a where
    mkLitVecOI = error "unreachable code!"

This approach has one advantage over Ill-Formedness Indicators and a few significant disadvantages. The advantage is that it involves much less noise/boilerplate and especially that it does not need to alter the class definition. The disadvantages are as follows:

  • Overlapping instances is not a trivial language extension; it introduces its own pitfalls, complexities, etc. Those aren’t apparent in this small MkLitVecOI example, but could be in more sophisticated domain-specific logics.
  • The open-world assumption requires that GHC can infer the constraint MkLitVecOI xs a without raising an error, which means this instance-triggered approach could not robustly enforce my simple desideratum of “litVecOI must always occur inside of variadic, full stop”, unless every call site occurred in a definition that has a user-written signature. Such pervasive signatures are often encouraged by standard coding styles, but it’s not guaranteed, and especially not during active development. GHC would most likely eventually raise the error (worst-case: in the library users’ builds), but the greater the number of unascribed definitions between the error message’s location and the mistake, the more confusing the custom error message’s location information would be. And thus I don’t consider that robust.
  • Sometimes the domain-specific logic already relies on overlapping instances. It may deserve more thought, but I haven’t already convinced myself that one can always add a set of overlapping instances that perfectly complement some domain-specific logic’s actual instances when some of those already rely on the overlapping instances language extension.

In my opinion, those disadvantages generally outweigh the advantage of less noise. Moreover, in this case, the MkLitVecIFI class is an internal implementation detail, so noise there wouldn’t affect the user, though it could, in general, make it harder for the developer to maintain the library. On the other hand, the addition of the single ifi type parameter and its propagation does seem mild to me, especially when the domain-specific constraints are already themselves big, which is likely compared to the intentionally small litVec example.

When to Use It

I anticipate Ill-Formedness Indicators are applicable in any type-level programming that shares at least one of the following characteristics of litVec.

  • I intend for litVec and variadic to feel like a new syntactic construct. If the user applies my combinators in an ill-formed way, then I don’t want GHC to grant it a value or even a type: I’d rather show a “parse error”.

  • I define the combinators’ semantics via a type class, but it is not my intent for the user to ever consider that class: they are not to instantiate it, not to include it in their signatures, etc.

  • Instead, any well-formed use of the combinators should only incur internal constraints that the instances in-scope at the call site fully resolve. Note that the internal constraints do not necessarily need to resolve all the way to True but merely down to a set of non-internal constraints, those that are intended for the user to see. In other words, I want GHC to use a very specific closed-world assumption: it should assume that the call site’s in-scope instances include all the instances that could possibly reduce the internal constraints. While my litVec example doesn’t involve the user declaring new such instances, a more sophisticated use case might intentionally allow for (power) users to do so, and such instances should also be considered at call sites. It’s subtle that the set of relevant instances can exhibit both extensibility and a closed-world assumption, but that’s only necessary in advanced usage.

Concretely, my colleagues at Tweag have identified a few use cases in the wild.

  • Facundo Domínguez notes that the inline-java package uses the OVERLAPPABLE alternative discussed above for its variadic functions. This is usually sufficient, since standard practice involves writing pervasive top-level signatures. But it does invite the occasional unexpectedly inferred Variadic_ constraint when used at the REPL prompt or in uses relying more heavily on inference. An Ill-Formedness Indicator would work even in those scenarios.

  • Alexander Esgen added an Ill-Formedness Indicator in the servant package to improve the user experience when the user tries to use a part of the Servant API for named routes but forgot the required instance of GHC.Generics.Generic.

  • One recent idea, with Christian Georgii, leaned heavily on type-level programming to validate that the user indeed declared a bijection between two enumeration data types. Just like MkLitVec et al, the user is never intended to consider the classes and instances encoding the mathematical definition of a bijection, but the classes show up whenever the user makes a typo, forgets to include a matched pair in the specified bijection, accidentally maps two pre-images to the same image, adds a new constructor to one of the corresponding data types, etc. Ill-Formedness Indicators can be mechanically added to improve the error messages to specify exactly why the user’s expression is not actually a bijection as intended.

Conclusion

Ill-Formedness Indicators enable library authors doing type-level programming to improve their users’ experience when the user applies the library’s combinators in a way that involves more polymorphism than the author intended.

Though an Ill-Formedness Indicator is merely a careful use of TypeError, it’s a clever trick that I hadn’t considered before seeing Csongor Kiss’s blog post, despite being aware for many years of both TypeError and the sorts of disappointing user experiences that Ill-Formedness Indicators address.

I can recommend Ill-Formedness Indicators despite their cleverness because the whole point is that they can be used to hide implementation details — aka “too much cleverness” — from the user experience. My only reservation in recommending it is that TypeError itself is perhaps too immature.

  • I know of no way to predict exactly when TypeError will trigger other than combing through the code of GHC’s typechecker (e.g., this commit). This behavior should instead be explicitly specified in the user guide. An Ill-Formedness Indicator depends on TypeError triggering under certain circumstances; I think those are fundamental to TypeError’s behavior and so unlikely to change, but I can’t be sure without the missing TypeError specification.

  • When GHC detects a TypeError, it suppresses some other flavors of error. In some of my more aggressive experiments with Ill-Formedness Indicators, this has led to some confusingly opaque custom error messages; because of the Ill-Formedness Indicator’s error message, GHC suppressed the error message that would have helped me identify my actual mistake. My workaround so far has been to remove the DelayTypeError err = TypeError err equation while debugging an Ill-Formedness Indicator message that makes no sense to me. Doing so completely disables the Ill-Formedness Indicators, but it sometimes reveals the other flavors of errors that were being suppressed. This could be more conveniently enabled with a -fdisarm-TypeError flag to GHC, etc. I suppose the tiers of error messages are there for a good reason, but a similar GHC flag, etc. to prevent such error suppression might also help in these cases.

I hope you find Ill-Formedness Indicators useful as you squeeze as much safety and quality of life as possible out of GHC!

July 20, 2023 12:00 AM

July 17, 2023

Philip Wadler

Gradual Effect Handlers

 

As part of the SAGE project funded by Huawei, Li-yao Xia has written in Agda a model of a gradual type system for effects and handlers. It is available on GitHub.


by Philip Wadler (noreply@blogger.com) at July 17, 2023 01:31 PM

July 14, 2023

Sandy Maguire

Certainty by Construction Progress Report 6

The following is a progress report for Certainty by Construction, a new book I’m writing on learning and effectively wielding Agda. Writing a book is a tedious and demoralizing process, so if this is the sort of thing you’re excited about, please do let me know!


Aaaand we’re back. Traveling was nice, but it’s nicer to be home and being productive and making things.

This week I did a lot of work on the isomorphisms chapter. First and foremost, I proved that everything I knew about cardinalities from the Curry-Howard isomorphism held true. That is, that sum types add the cardinalities of their constituent types, product types multiply them, and by far the hardest to prove, that functions act as exponentials.
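These counting claims are easy to spot-check in Haskell by brute-force enumeration — a quick sketch of my own, separate from the book’s Agda development:

```haskell
{-# LANGUAGE TypeApplications #-}

-- Enumerate every inhabitant of a small finite type.
inhabitants :: (Bounded a, Enum a) => [a]
inhabitants = [minBound .. maxBound]

-- Sum types add cardinalities: |Either Bool Ordering| = 2 + 3.
sums :: [Either Bool Ordering]
sums = map Left (inhabitants @Bool) ++ map Right (inhabitants @Ordering)

-- Product types multiply them: |(Bool, Ordering)| = 2 * 3.
prods :: [(Bool, Ordering)]
prods = [(b, o) | b <- inhabitants, o <- inhabitants]

-- Functions act as exponentials: |Bool -> Ordering| = 3 ^ 2.
funs :: [Bool -> Ordering]
funs = [\b -> if b then t else f | t <- inhabitants, f <- inhabitants]

main :: IO ()
main = print (length sums, length prods, length funs)  -- (5,6,9)
```

Of course, enumerating inhabitants is a far cry from proving the isomorphisms, which is what the chapter actually does.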

Going through the work of that taught me that I haven’t really internalized everything I ought to have regarding setoids, since I originally framed the problem wrong and needed Reed to help sort me out. There is some material in this chapter about building the relevant setoids for all of the necessary types, which sucks and would be better to avoid. I’m unsure if it will get moved out to the setoid chapter, or if I’ll just give a sketch in the final version, or maybe if it just gets left where it is.

For me, the motivating use case behind the algebra of types is to find different representations of things, ones with better computational properties. This turns out to be extremely easy to exploit in Haskell, but upon trying to write about it, I realized how much magic the Haskell runtime is doing in order to make that happen. It’s amazing that I’m still managing to trick myself into thinking I understand things, even after working on this book for nearly a year. But I suppose that’s the whole purpose!

So anyway, that section isn’t yet finished, but I think one more week will be enough to tie it together. And at that point, I’ve written everything I intend to, and will spend the remainder of my project time on editing, rewriting, cleaning up, and tackling the weird typesetting problems that remain. The end is nigh!


That’s all for today. If you’ve already bought the book, you can get the updates for free on Leanpub. If you haven’t, might I suggest doing so? Your early support and feedback helps inspire me and ensure the book is as good as it can possibly be.

July 14, 2023 12:00 AM

July 13, 2023

Brent Yorgey

Compiling to Intrinsically Typed Combinators

tl;dr: How to compile a functional language via combinators (and evaluate via the Haskell runtime) while keeping the entire process type-indexed, with a bibliography and lots of references for further reading

There is a long history, starting with Schönfinkel and Curry, of abstracting away variable names from lambda calculus terms by converting to combinators, aka bracket abstraction. This was popular in the 80’s as a compilation technique for functional languages (Turner, 1979; Augustsson, 1986; Jones, 1987; Diller, 1988), then apparently abandoned. More recently, however, it has been making a bit of a comeback. For example, see Naylor (2008), Gratzer (2015), Lynn (2017), and Mahler (2021). Bracket abstraction is intimately related to compiling to cartesian closed categories (Elliott, 2017; Mahler, 2021), and also enables cool tricks like doing evaluation via the Haskell runtime system (Naylor, 2008; Seo, 2016; Mahler, 2022).

However, it always bothered me that the conversion to combinators was invariably described in an untyped way. Partly to gain some assurance that we are doing things correctly, but mostly for fun, I wondered if it would be possible to do the whole pipeline in an explicitly type-indexed way. I eventually found a nice paper by Oleg Kiselyov (2018) which explains exactly how to do it (it even came with OCaml code that I was easily able to port to Haskell!).

In this blog post, I:

  • Show an example of typechecking and elaboration for a functional language into explicitly type-indexed terms, such that it is impossible to write down ill-typed terms
  • Demonstrate a Haskell port of Oleg Kiselyov’s typed bracket abstraction algorithm
  • Demonstrate type-indexed evaluation of terms via the Haskell runtime
  • Put together an extensive bibliography with references for further reading

This blog post is rendered automatically from a literate Haskell file; you can find the complete working source code and blog post on GitHub. I’m always happy to receive comments, fixes, or suggestions for improvement.

But First, A Message From Our Sponsors

So many yummy language extensions.

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ExplicitForAll #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE ImportQualifiedPost #-}
{-# LANGUAGE InstanceSigs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PatternSynonyms #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UnicodeSyntax #-}
{-# LANGUAGE ViewPatterns #-}

module TypedCombinators where

import Control.Monad.Combinators.Expr
import Data.Functor.Const qualified as F
import Data.Void
import Data.Text ( Text )
import Data.Text qualified as T
import Data.Kind (Type)
import Data.Type.Equality ( type (:~:)(Refl), TestEquality(..) )
import Text.Megaparsec
import Text.Megaparsec.Char
import Text.Megaparsec.Char.Lexer qualified as L
import Witch (into)
import Prelude hiding (lookup)

Raw terms and types

Here’s an algebraic data type to represent raw terms of our DSL, something which might come directly out of a parser. The exact language we use here isn’t all that important; I’ve put in just enough features to make it nontrivial, but not much beyond that. We have integer literals, variables, lambdas, application, let and if expressions, addition, and comparison with >. Of course, it would be easy to add more types, constants, and language features.

data Term where
  Lit :: Int -> Term
  Var :: Text -> Term
  Lam :: Text -> Ty -> Term -> Term
  App :: Term -> Term -> Term
  Let :: Text -> Term -> Term -> Term
  If  :: Term -> Term -> Term -> Term
  Add :: Term -> Term -> Term
  Gt  :: Term -> Term -> Term
  deriving Show

A few things to note:

  • In order to keep things simple, notice that lambdas must be annotated with the type of the argument. There are other choices we could make, but this is the simplest for now. I’ll have more to say about other choices later.

  • I included if not only because it gives us something to do with Booleans, but also because it is polymorphic, which adds an interesting twist to our typechecking.

  • I included >, not only because it gives us a way to produce Boolean values, but also because it uses ad-hoc polymorphism, that is, we can compare at any type which is an instance of Ord. This is an even more interesting twist.

Here are our types: integers, booleans, and functions.

data Ty where
  TyInt  :: Ty
  TyBool :: Ty
  TyFun  :: Ty -> Ty -> Ty
  deriving Show

Finally, here’s an example term that uses all the features of our language (I’ve included a simple parser in an appendix at the end of this post):

example :: Term
example = readTerm $ T.unlines
  [ "let twice = \\f:Int -> Int. \\x:Int. f (f x) in"
  , "let z = 1 in"
  , "if 7 > twice (\\x:Int. x + 3) z then z else z + 1"
  ]

Since 7 is not, in fact, strictly greater than 1 + 3 + 3, this should evaluate to 2.

Type-indexed constants

That was the end of our raw, untyped representations—from now on, everything is going to be type-indexed! First of all, we’ll declare an enumeration of constants, with each constant indexed by its corresponding host language type. These will include both any special language built-ins (like if, +, and >) as well as a set of combinators which we’ll be using as a compilation target—more on these later.

data Const :: Type -> Type where
  CInt :: Int -> Const Int
  CIf :: Const (Bool -> α -> α -> α)
  CAdd :: Const (Int -> Int -> Int)
  CGt :: Ord α => Const (α -> α -> Bool)
  I :: Const (α -> α)
  K :: Const (α -> b -> α)
  S :: Const ((α -> b -> c) -> (α -> b) -> α -> c)
  B :: Const ((     b -> c) -> (α -> b) -> α -> c)
  C :: Const ((α -> b -> c) ->       b  -> α -> c)

deriving instance Show (Const α)

The polymorphism of if (and the combinators I, K, etc., for that matter) poses no real problems. If we really wanted the type of CIf to be indexed by the exact type of if, it would be something like

  CIf :: Const (∀ α. Bool -> α -> α -> α)

but this would require impredicative types which can be something of a minefield. However, what we actually get is

  CIf :: ∀ α. Const (Bool -> α -> α -> α)

which is unproblematic and works just as well for our purposes.

The type of CGt is more interesting: it includes an Ord α constraint. That means that at the time we construct a CGt value, we must have in scope an Ord instance for whatever type α is; conversely, when we pattern-match on CGt, we will bring that instance into scope. We will see how to deal with this later.
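The dictionary-capturing pattern behind CGt can be seen in a tiny standalone sketch (OrdDict and gtWith are illustrative names of mine, not part of this post’s pipeline):

```haskell
{-# LANGUAGE GADTs #-}

-- A constructor that captures an Ord dictionary at construction
-- time, just as CGt does.
data OrdDict a where
  OrdDict :: Ord a => OrdDict a

-- Pattern-matching on OrdDict brings the Ord a instance back into
-- scope, so (>) is available on the right-hand side.
gtWith :: OrdDict a -> a -> a -> Bool
gtWith OrdDict x y = x > y

main :: IO ()
main = print (gtWith OrdDict (3 :: Int) 2)  -- True
```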

For convenience, we make a type class HasConst for type-indexed things that can contain embedded constants (we will end up with several instances of this class).

class HasConst t where
  embed :: Const α -> t α

Also for convenience, here’s a type class for type-indexed things that support some kind of application operation. (Note that we don’t necessarily want to require t to support a pure :: a -> t a operation, or even be a Functor, so using Applicative would not be appropriate, even though $$ has the same type as <*>.)

infixl 1 $$
class Applicable t where
  ($$) :: t (α -> β) -> t α -> t β

Note that, unlike the standard $ operator, $$ is left-associative, so, for example, f $$ x $$ y should be read just like f x y, that is, f $$ x $$ y = (f $$ x) $$ y.

Finally, we’ll spend a bunch of time applying constants to things, or applying things to constants, so here are a few convenience operators for combining $$ and embed:

infixl 1 .$$
(.$$) :: (HasConst t, Applicable t) => Const (α -> β) -> t α -> t β
c .$$ t = embed c $$ t

infixl 1 $$.
($$.) :: (HasConst t, Applicable t) => t (α -> β) -> Const α -> t β
t $$. c = t $$ embed c

infixl 1 .$$.
(.$$.) :: (HasConst t, Applicable t) => Const (α -> β) -> Const α -> t β
c1 .$$. c2 = embed c1 $$ embed c2

Type-indexed types and terms

Now let’s build up our type-indexed core language. First, we’ll need a data type for type-indexed de Bruijn indices. A value of type Idx γ α is a variable with type α in the context γ (represented as a type-level list of types). For example, Idx [Int,Bool,Int] Int would represent a variable of type Int (and hence must either be variable 0 or 2).

data Idx :: [Type] -> Type -> Type where
  VZ :: Idx (α ': γ) α
  VS :: Idx γ α -> Idx (β ': γ) α

deriving instance Show (Idx γ α)

Now we can build our type-indexed terms. Just like variables, terms are indexed by a typing context and a type; t : TTerm γ α can be read as “t is a term with type α, possibly containing variables whose types are described by the context γ”. Our core language has only variables, constants, lambdas, and application. Note we’re not just making a type-indexed version of our original term language; for simplicity, we’re going to simultaneously typecheck and elaborate down to this much simpler core language. (Of course, it would also be entirely possible to introduce another intermediate data type for type-indexed terms, and separate the typechecking and elaboration phases.)

data TTerm :: [Type] -> Type -> Type where
  TVar :: Idx γ α -> TTerm γ α
  TConst :: Const α -> TTerm γ α
  TLam :: TTerm (α ': γ) β -> TTerm γ (α -> β)
  TApp :: TTerm γ (α -> β) -> TTerm γ α -> TTerm γ β

deriving instance Show (TTerm γ α)

instance Applicable (TTerm γ) where
  ($$) = TApp

instance HasConst (TTerm γ) where
  embed = TConst

Now for some type-indexed types!

data TTy :: Type -> Type where
  TTyInt :: TTy Int
  TTyBool :: TTy Bool
  (:->:) :: TTy α -> TTy β -> TTy (α -> β)

deriving instance Show (TTy ty)

TTy is a term-level representation of our DSL’s types, indexed by corresponding host language types. In other words, TTy is a singleton: for a given type α there is a single value of type TTy α. Put another way, pattern-matching on a value of type TTy α lets us learn what the type α is. (See (Le, 2017) for a nice introduction to the idea of singleton types.)
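
To see the singleton property in action, here is a tiny standalone sketch (the `defaultVal` helper is my own illustration, not part of the development): matching on a `TTy` constructor refines the type index, letting each branch return a value of that very type.

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)

-- A standalone copy of TTy, as defined above.
data TTy :: Type -> Type where
  TTyInt  :: TTy Int
  TTyBool :: TTy Bool
  (:->:)  :: TTy a -> TTy b -> TTy (a -> b)

-- Pattern-matching on the singleton teaches GHC what the index is,
-- so each branch may produce a value of the corresponding host type.
defaultVal :: TTy a -> a
defaultVal TTyInt     = 0
defaultVal TTyBool    = False
defaultVal (_ :->: b) = \_ -> defaultVal b

main :: IO ()
main = print (defaultVal TTyInt, defaultVal TTyBool)
```

For example, `defaultVal (TTyInt :->: TTyBool)` is a function that ignores its `Int` argument and returns `False`.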

We will need to be able to test two value-level type representations for equality and have that reflected at the level of type indices; the TestEquality class from Data.Type.Equality is perfect for this. The testEquality function takes two type-indexed things and returns a type equality proof wrapped in Maybe.

instance TestEquality TTy where
  testEquality :: TTy α -> TTy β -> Maybe (α :~: β)
  testEquality TTyInt TTyInt = Just Refl
  testEquality TTyBool TTyBool = Just Refl
  testEquality (α₁ :->: β₁) (α₂ :->: β₂) =
    case (testEquality α₁ α₂, testEquality β₁ β₂) of
      (Just Refl, Just Refl) -> Just Refl
      _ -> Nothing
  testEquality _ _ = Nothing

Recall that the CGt constant requires an Ord instance; the checkOrd function pattern-matches on a TTy and witnesses the fact that the corresponding host-language type has an Ord instance (if, in fact, it does).

checkOrd :: TTy α -> (Ord α => r) -> Maybe r
checkOrd TTyInt r = Just r
checkOrd TTyBool r = Just r
checkOrd _ _ = Nothing

As a quick aside, for simplicity’s sake, I am going to use Maybe throughout the rest of this post to indicate possible failure. In a real implementation, one would of course want to return more information about any error(s) that occur.

Existential wrappers

Sometimes we will need to wrap type-indexed things inside an existential wrapper to hide the type index. For example, when converting from a Ty to a TTy, or when running type inference, we can’t know in advance which type we’re going to get. So we create the Some data type which wraps up a type-indexed thing along with a corresponding TTy. Pattern-matching on the singleton TTy will allow us to recover the type information later.

data Some :: (Type -> Type) -> Type where
  Some :: TTy α -> t α -> Some t

mapSome :: (forall α. s α -> t α) -> Some s -> Some t
mapSome f (Some α t) = Some α (f t)

The first instantiation we’ll create is an existentially wrapped type, where the TTy itself is the only thing we care about, and the corresponding t will just be the constant unit type functor. It would be annoying to keep writing F.Const () everywhere, so we create a type synonym and a pattern synonym for convenience.

type SomeTy = Some (F.Const ())

pattern SomeTy :: TTy α -> SomeTy
pattern SomeTy α = Some α (F.Const ())
{-# COMPLETE SomeTy #-}

The someType function converts from a raw Ty to a type-indexed TTy, wrapped up in an existential wrapper.

someType :: Ty -> SomeTy
someType TyInt = SomeTy TTyInt
someType TyBool = SomeTy TTyBool
someType (TyFun a b) = case (someType a, someType b) of
  (SomeTy α, SomeTy β) -> SomeTy (α :->: β)

Type inference and elaboration

Now that we have our type-indexed core language all set, it’s time to do type inference, that is, translate from untyped terms to type-indexed ones! First, let’s define type contexts, i.e. mappings from variables to their types. We store contexts simply as a (fancy, type-indexed) list of variable names paired with their types. This is inefficient—it takes linear time to do a lookup—but we don’t care, because this is an intermediate representation used only during typechecking. By the time we actually get around to running terms, variables won’t even exist any more.

data Ctx :: [Type] -> Type where

  -- CNil represents an empty context.
  CNil :: Ctx '[]

  -- A cons stores a variable name and its type,
  -- and then the rest of the context.
  (:::) :: (Text, TTy α) -> Ctx γ -> Ctx (α ': γ)

Now we can define the lookup function, which takes a variable name and a context and tries to return a corresponding de Bruijn index into the context. When looking up a variable name in the context, we can’t know in advance what index we will get and what type it will have, so we wrap the returned Idx in Some.

lookup :: Text -> Ctx γ -> Maybe (Some (Idx γ))
lookup _ CNil = Nothing
lookup x ((y, α) ::: ctx)
  | x == y = Just (Some α VZ)
  | otherwise = mapSome VS <$> lookup x ctx

Now we’re finally ready to define the infer function! It takes a type context and a raw term, and tries to compute a corresponding type-indexed term. Note that there’s no particular guarantee that the term we return corresponds to the input term—we will just have to be careful—but at least the Haskell type system guarantees that we can’t return a type-incorrect term, which is especially important when we have some nontrivial elaboration to do. Of course, just as with variable lookups, when inferring the type of a term we can’t know in advance what type it will have, so we will need to return an existential wrapper around a type-indexed term.

infer :: Ctx γ -> Term -> Maybe (Some (TTerm γ))
infer ctx = \case

To infer the type of a literal integer value, just return TTyInt with a literal integer constant.

  Lit i -> return $ Some TTyInt (embed (CInt i))

To infer the type of a variable, look it up in the context and wrap the result in TVar. Notice how we are allowed to pattern-match on the Some returned from lookup (revealing the existentially quantified type inside) since we immediately wrap it back up in another Some when returning the TVar.

  Var x -> mapSome TVar <$> lookup x ctx

To infer the type of a lambda, we convert the argument type annotation to a type-indexed type, infer the type of the body under an extended context, and then return a lambda with an appropriate function type. (If lambdas weren’t required to have type annotations, then we would either have to move the lambda case to the check function, or else use unification variables and solve type equality constraints. The former would be straightforward, but I don’t know how to do the latter in a type-indexed way—sounds like a fun problem for later.)

  Lam x a t -> do
    case someType a of
      Some α _ -> do
        Some β t' <- infer ((x,α) ::: ctx) t
        return $ Some (α :->: β) (TLam t')

To infer the type of an application, we infer the type of the left-hand side, ensure it is a function type, and check that the right-hand side has the correct type. We will see the check function later.

  App t1 t2 -> do
    Some τ t1' <- infer ctx t1
    case τ of
      α :->: β -> do
        t2' <- check ctx α t2
        return $ Some β (TApp t1' t2')
      _ -> Nothing

To infer the type of a let-expression, we infer the type of the definition, infer the type of the body under an extended context, and then desugar it into an application of a lambda. That is, let x = t1 in t2 desugars to (\x.t2) t1.

  Let x t1 t2 -> do
    Some α t1' <- infer ctx t1
    Some β t2' <- infer ((x, α) ::: ctx) t2
    return $ Some β (TApp (TLam t2') t1')

Note again that we can’t accidentally get mixed up here—for example, if we incorrectly desugar to (\x.t1) t2 we get a Haskell type error, like this:

    • Couldn't match type ‘γ’ with ‘α : γ’
      Expected: TTerm γ α1
        Actual: TTerm (α : γ) α1

To infer an if-expression, we can check that the test has type Bool, infer the types of the two branches, and ensure that they are the same. If so, we return the CIf constant applied to the three arguments. The reason this typechecks is that pattern-matching on the Refl from the testEquality call brings into scope the fact that the types of t2 and t3 are equal, so we can apply CIf which requires them to be so.

  If t1 t2 t3 -> do
    t1' <- check ctx TTyBool t1
    Some α t2' <- infer ctx t2
    Some β t3' <- infer ctx t3
    case testEquality α β of
      Nothing -> Nothing
      Just Refl -> return $ Some α (CIf .$$ t1' $$ t2' $$ t3')

Addition is simple; we just check that both arguments have type Int.

  Add t1 t2 -> do
    t1' <- check ctx TTyInt t1
    t2' <- check ctx TTyInt t2
    return $ Some TTyInt (CAdd .$$ t1' $$ t2')

“Greater than” is a bit interesting because we allow it to be used at both Int and Bool. So, just as with if, we must infer the types of the arguments and check that they match. But then we must also use the checkOrd function to ensure that the argument types are an instance of Ord. In particular, we wrap CGt (which requires an Ord constraint) in a call to checkOrd α (which provides one).

  Gt t1 t2 -> do
    Some α t1' <- infer ctx t1
    Some β t2' <- infer ctx t2
    case testEquality α β of
      Nothing -> Nothing
      Just Refl -> (\c -> Some TTyBool (c .$$ t1' $$ t2')) <$> checkOrd α CGt

Finally, here’s the check function: to check that an expression has an expected type, just infer its type and make sure it’s the one we expected. (With more interesting languages we might also have more cases here for terms which can be checked but not inferred.) Notice how this also allows us to return the type-indexed term without using an existential wrapper, since the expected type is an input.

check :: Ctx γ -> TTy α -> Term -> Maybe (TTerm γ α)
check ctx α t = do
  Some β t' <- infer ctx t
  case testEquality α β of
    Nothing -> Nothing
    Just Refl -> Just t'

Putting this all together so far, we can check that the example term has type Int and see what it elaborates to (I’ve included a simple pretty-printer for TTerm in an appendix):

λ> putStrLn . pretty . fromJust . check CNil TTyInt $ example
(λ. (λ. if (gt 7 (x1 (λ. plus x0 3) x0)) x0 (plus x0 1)) 1) (λ. λ. x1 (x1 x0))

An aside: a typed interpreter

We can now easily write an interpreter. However, this is pretty inefficient (it has to carry around an environment and do linear-time variable lookups), and later we’re going to compile our terms directly to host language terms. So this interpreter is just a nice aside, for fun and testing.

With that said, given a closed term, we can interpret it directly to a value of its corresponding host language type. We need typed environments and an indexing function (note that for some reason GHC can’t see that the last case of the indexing function is impossible; if we tried implementing it in, say, Agda, we wouldn’t have to write that case).

data Env :: [Type] -> Type where
  ENil :: Env '[]
  ECons :: α -> Env γ -> Env (α ': γ)

(!) :: Env γ -> Idx γ α -> α
(ECons x _) ! VZ = x
(ECons _ e) ! (VS x) = e ! x
ENil ! _ = error "GHC can't tell this is impossible"

Now the interpreter is straightforward. Look how beautifully everything works out with the type indexing.

interpTTerm :: TTerm '[] α -> α
interpTTerm = go ENil
  where
    go :: Env γ -> TTerm γ α -> α
    go e = \case
      TVar x -> e ! x
      TLam body -> \x -> go (ECons x e) body
      TApp f x -> go e f (go e x)
      TConst c -> interpConst c

interpConst :: Const α -> α
interpConst = \case
  CInt i -> i
  CIf -> \b t e -> if b then t else e
  CAdd -> (+)
  CGt -> (>)
  K -> const
  S -> (<*>)
  I -> id
  B -> (.)
  C -> flip
λ> interpTTerm . fromJust . check CNil TTyInt $ example
2

Compiling to combinators: type-indexed bracket abstraction

Now, on with the main attraction! It’s well-known that certain sets of combinators are Turing-complete: for example, SKI is the most well-known complete set (or just SK if you’re trying to be minimal). There are well-known algorithms for compiling lambda calculus terms into combinators, known generally as bracket abstraction (for further reading about bracket abstraction in general, see Diller (2014); for some in-depth history along with illustrative Haskell code, see Ben Lynn’s page on Combinatory Logic (2022); for nice example implementations in Haskell, see blog posts by Gratzer (2015), Seo (2016), and Mahler (2021).)

So the idea is to compile our typed core language down to combinators. The resulting terms will have no lambdas or variables—only constants and application! The point is that by making environments implicit, with a few more tricks we can make use of the host language runtime’s ability to do beta reduction, which will be much more efficient than our interpreter.

The BTerm type below will be the compilation target. Again for illustration and/or debugging we can easily write a direct interpreter for BTerm—but this still isn’t the intended code path. There will still be one more step to convert BTerms directly into host language terms.

data BTerm :: Type -> Type where
  BApp :: BTerm (α -> β) -> BTerm α -> BTerm β
  BConst :: Const α -> BTerm α

deriving instance Show (BTerm ty)

instance Applicable BTerm where
  ($$) = BApp

instance HasConst BTerm where
  embed = BConst

interpBTerm :: BTerm ty -> ty
interpBTerm (BApp f x) = interpBTerm f (interpBTerm x)
interpBTerm (BConst c) = interpConst c

We will use the usual SKI combinators as well as B and C, which are like special-case variants of S:

  • S x y z = x z (y z)
  • B x y z = x (y z)
  • C x y z = x z y

S handles the application of x to y in the case where they both need access to a shared parameter z; B and C are similar, but B is used when only y, and not x, needs access to z, and C is for when only x needs access to z. Using B and C will allow for more efficient encodings than would be possible with S alone. If you want to compile a language with recursion you can also easily add the usual Y combinator (“SICKBY”), although the example language in this post has no recursion so we won’t use it.
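
Spelled out as plain Haskell functions (definitions mine, following the equations above), B and C are just `(.)` and `flip`:

```haskell
-- The classic combinators written as ordinary functions:
s :: (z -> a -> b) -> (z -> a) -> z -> b
s x y z = x z (y z)   -- both x and y receive the shared parameter z

b :: (a -> b) -> (z -> a) -> z -> b
b x y z = x (y z)     -- only y receives z; b is function composition (.)

c :: (z -> a -> b) -> a -> z -> b
c x y z = x z y       -- only x receives z; c is flip

main :: IO ()
main = print (s (+) (* 2) 5, b (+ 1) (* 2) 5, c (-) 2 10)
-- prints (15,11,8)
```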

Bracket abstraction is often presented in an untyped way, but I found this really cool paper by Oleg Kiselyov (2018) where he shows how to do bracket abstraction in a completely compositional, type-indexed way. I found the paper a bit hard to understand, but fortunately it came with working OCaml code! Translating it to Haskell was straightforward. Much later, after writing most of this blog post, I found a nice explanation of Kiselyov’s paper by Lynn (2022) which helped me make more sense of the paper.

First, a data type for open terms, which represent an intermediate stage in the bracket abstraction algorithm, where some parts have been converted to closed combinator terms (the E constructor embeds BTerm values), and some parts still have not. This corresponds to Kiselyov’s eta-optimized version (section 4.1 of the paper). A simplified version that does not include V is possible, but results in longer combinator expressions.

data OTerm :: [Type] -> Type -> Type where

  -- E contains embedded closed (i.e. already abstracted) terms.
  E :: BTerm α -> OTerm γ α

  -- V represents a reference to the innermost/top environment
  -- variable, i.e. Z
  V :: OTerm (α ': γ) α

  -- N represents internalizing the innermost bound variable as a
  -- function argument. In other words, we can represent an open
  -- term referring to a certain variable as a function which
  -- takes that variable as an argument.
  N :: OTerm γ (α -> β) -> OTerm (α ': γ) β

  -- For efficiency, there is also a special variant of N for the
  -- case where the term does not refer to the topmost variable at
  -- all.
  W :: OTerm γ β -> OTerm (α ': γ) β

instance HasConst (OTerm γ) where
  embed = E . embed

Now for the bracket abstraction algorithm. First, a function to do type- and environment-preserving conversion from TTerm to OTerm. The conv function handles the variable, lambda, and constant cases. The application case is handled by the Applicable instance.

conv :: TTerm γ α -> OTerm γ α
conv = \case
  TVar VZ -> V
  TVar (VS x) -> W (conv (TVar x))
  TLam t -> case conv t of
    V -> E (embed I)
    E d -> E (K .$$ d)
    N e -> e
    W e -> K .$$ e
  TApp t1 t2 -> conv t1 $$ conv t2
  TConst c -> embed c

The Applicable instance for OTerm has 15 cases—one for each combination of OTerm constructors. Why not 16, you ask? Because the V $$ V case is impossible (exercise for the reader: why?). The cool thing is that GHC can tell that case would be ill-typed, and agrees that this definition is total—that is, it does not give a non-exhaustive pattern match warning. This is a lot of code, but understanding each individual case is not too hard if you understand the meaning of the constructors E, V, N, and W. For example, if we have one term that ignores the innermost bound variable being applied to another term that also ignores the innermost bound variable (W e1 $$ W e2), we can apply one term to the other and wrap the result in W again (W (e1 $$ e2)). Other cases use the combinators B, C, S to route the input to the proper places in an application.

instance Applicable (OTerm γ) where
  ($$) :: OTerm γ (α -> β) -> OTerm γ α -> OTerm γ β
  W e1 $$ W e2 = W (e1 $$ e2)
  W e $$ E d = W (e $$ E d)
  E d $$ W e = W (E d $$ e)
  W e $$ V = N e
  V $$ W e = N (E (C .$$. I) $$ e)
  W e1 $$ N e2 = N (B .$$ e1 $$ e2)
  N e1 $$ W e2 = N (C .$$ e1 $$ e2)
  N e1 $$ N e2 = N (S .$$ e1 $$ e2)
  N e $$ V = N (S .$$ e $$. I)
  V $$ N e = N (E (S .$$. I) $$ e)
  E d $$ N e = N (E (B .$$ d) $$ e)
  E d $$ V = N (E d)
  V $$ E d = N (E (C .$$. I $$ d))
  N e $$ E d = N (E (C .$$. C $$ d) $$ e)
  E d1 $$ E d2 = E (d1 $$ d2)

The final bracket abstraction algorithm consists of calling conv on a closed TTerm—this must result in a term of type OTerm '[] α, and the only constructor which could possibly produce such a type is E, containing an embedded BTerm. So we can just extract that BTerm, and GHC can see that this is total.

bracket :: TTerm '[] α -> BTerm α
bracket t = case conv t of { E t' -> t' }

Let’s apply this to our example term and see what we get:

λ> putStrLn . pretty . bracket . fromJust . check CNil TTyInt $ example
C C 1 (C C (C C 1 plus) (B S (C C I (B S (B (B if) (B (B (gt 7)) (C I (C C 3 plus)))))))) (S B I)
λ> interpBTerm . bracket . fromJust . check CNil TTyInt $ example
2

Neat! This is not too much longer than the original term, which is the point of using the optimized version. Interestingly, this example happens to not use K at all, but a more complex term certainly would.

Kiselyov also presents an even better algorithm using n-ary combinators, with guaranteed linear time and space. For simplicity, he presents it in an untyped way and claims in passing that it “can be backported to the typed case”, though I am not aware of anyone who has actually done this yet (perhaps I will, later). Lynn (2022) has a nice explanation of Kiselyov’s paper, including a section that explores several alternatives to Kiselyov’s linear-time algorithm.

Compiling type-indexed combinators to Haskell

So at this point we can take a Term, typecheck it to produce a TTerm, then use bracket abstraction to convert that to a BTerm. We have an interpreter for BTerms, but we’re instead going to do one more compilation step, to turn BTerms directly into native Haskell values. This idea originates with Naylor (2008) and is well-explained in blog posts by Seo (2016) and Mahler (2022). This still feels a little like black magic to me, and I am actually unclear on whether it is really faster than calling interpBTerm; some benchmarking would be needed. In any case I include it here for completeness.

Our target for this final compilation step is the following CTerm type, which has only functions, represented by CFun, and constants. Note, however, that CConst is intended to be used only for non-function types, i.e. base types, although there’s no nice way (that I know of, at least) to use the Haskell type system to enforce this.

data CTerm α where
  CFun :: (CTerm α -> CTerm β) -> CTerm (α -> β)
  CConst :: α -> CTerm α -- CConst invariant: α is not a function type

instance Applicable CTerm where
  CFun f $$ x = f x
  CConst _ $$ _ = error "CConst should never contain a function!"

compile :: BTerm α -> CTerm α
compile (BApp b1 b2) = compile b1 $$ compile b2
compile (BConst c) = compileConst c

compileConst :: Const α -> CTerm α
compileConst = \case
  (CInt i) -> CConst i
  CIf      -> CFun $ \(CConst b) -> CFun $ \t -> CFun $ \e -> if b then t else e
  CAdd     -> binary (+)
  CGt      -> binary (>)
  K        -> CFun $ \x -> CFun $ \_ -> x
  S        -> CFun $ \f -> CFun $ \g -> CFun $ \x -> f $$ x $$ (g $$ x)
  I        -> CFun id
  B        -> CFun $ \f -> CFun $ \g -> CFun $ \x -> f $$ (g $$ x)
  C        -> CFun $ \f -> CFun $ \x -> CFun $ \y -> f $$ y $$ x

binary :: (α -> b -> c) -> CTerm (α -> b -> c)
binary op = CFun $ \(CConst x) -> CFun $ \(CConst y) -> CConst (op x y)

Finally, we can “run” a CTerm α to extract a value of type α. Typically, if α is some kind of base type like Int, runCTerm doesn’t actually do any work—all the work is done by the Haskell runtime itself. However, for completeness, I include a case for CFun as well.

runCTerm :: CTerm α -> α
runCTerm (CConst a) = a
runCTerm (CFun f) = runCTerm . f . CConst

We can put this all together into our final pipeline:

evalInt :: Term -> Maybe Int
evalInt = fmap (runCTerm . compile . bracket) . check CNil TTyInt
λ> evalInt example
Just 2

Appendices

There’s nothing interesting to see here—unless you’ve never written a parser or pretty-printer before, in which case perhaps it is very interesting! If you want to learn how to write parsers, see this very nice Megaparsec tutorial. And see here for some help writing a basic pretty-printer.

Parsing

type Parser = Parsec Void Text
type ParserError = ParseErrorBundle Text Void

reservedWords :: [Text]
reservedWords = ["let", "in", "if", "then", "else", "Int", "Bool"]

sc :: Parser ()
sc = L.space space1 (L.skipLineComment "--") empty

lexeme :: Parser a -> Parser a
lexeme = L.lexeme sc

symbol :: Text -> Parser Text
symbol = L.symbol sc

reserved :: Text -> Parser ()
reserved w = (lexeme . try) $ string' w *> notFollowedBy alphaNumChar

identifier :: Parser Text
identifier = (lexeme . try) (p >>= nonReserved) <?> "variable name"
 where
  p = (:) <$> letterChar <*> many alphaNumChar
  nonReserved (into @Text -> t)
    | t `elem` reservedWords =
        fail . into @String $
          T.concat ["reserved word '", t, "' cannot be used as variable name"]
    | otherwise = return t

integer :: Parser Int
integer = lexeme L.decimal

parens :: Parser a -> Parser a
parens = between (symbol "(") (symbol ")")

parseTermAtom :: Parser Term
parseTermAtom =
      Lit <$> integer
  <|> Var <$> identifier
  <|> Lam <$> (symbol "\\" *> identifier)
          <*> (symbol ":" *> parseType)
          <*> (symbol "." *> parseTerm)
  <|> Let <$> (reserved "let" *> identifier)
          <*> (symbol "=" *> parseTerm)
          <*> (reserved "in" *> parseTerm)
  <|> If  <$> (reserved "if" *> parseTerm)
          <*> (reserved "then" *> parseTerm)
          <*> (reserved "else" *> parseTerm)
  <|> parens parseTerm

parseTerm :: Parser Term
parseTerm = makeExprParser parseTermAtom
  [ [InfixL (App <$ symbol "")]
  , [InfixL (Add <$ symbol "+")]
  , [InfixL (Gt <$ symbol ">")]
  ]

parseTypeAtom :: Parser Ty
parseTypeAtom =
  TyInt <$ reserved "Int"
  <|> TyBool <$ reserved "Bool"
  <|> parens parseType

parseType :: Parser Ty
parseType = makeExprParser parseTypeAtom
  [ [InfixR (TyFun <$ symbol "->")] ]

readTerm :: Text -> Term
readTerm = either undefined id . runParser parseTerm ""

Pretty-printing

type Prec = Int

class Pretty p where
  pretty :: p -> String
  pretty = prettyPrec 0

  prettyPrec :: Prec -> p -> String
  prettyPrec _ = pretty

mparens :: Bool -> String -> String
mparens True  = ("("++) . (++")")
mparens False = id

instance Pretty (Const α) where
  prettyPrec _ = \case
    CInt i -> show i
    CIf -> "if"
    CAdd -> "plus"
    CGt  -> "gt"
    c -> show c

instance Pretty (Idx γ α) where
  prettyPrec _ = ("x" ++) . show . toNat
    where
      toNat :: Idx γ α -> Int
      toNat VZ = 0
      toNat (VS i) = 1 + toNat i

instance Pretty (TTerm γ α) where
  prettyPrec p = \case
    TVar x -> pretty x
    TConst c -> pretty c
    TLam t -> mparens (p>0) $ "λ. " ++ prettyPrec 0 t
    TApp t1 t2 -> mparens (p>1) $
      prettyPrec 1 t1 ++ " " ++ prettyPrec 2 t2

instance Pretty (BTerm α) where
  prettyPrec p = \case
    BConst c -> pretty c
    BApp t1 t2 -> mparens (p>0) $
      prettyPrec 0 t1 ++ " " ++ prettyPrec 1 t2

References

Augustsson, L. (1986) SMALL: A small interactive functional system. Chalmers Tekniska Högskola/Göteborgs Universitet. Programming Methodology Group.
Diller, A. (1988) Compiling functional languages. John Wiley & Sons, Inc.
Diller, A. (2014) Bracket abstraction algorithms. Available at: https://www.cantab.net/users/antoni.diller/brackets/intro.html.
Elliott, C. (2017) ‘Compiling to categories’, Proceedings of the ACM on Programming Languages, 1(ICFP), pp. 1–27.
Gratzer, D. (2015) Bracket Abstraction: The Smallest PL You’ve Ever Seen. Available at: https://jozefg.bitbucket.io/posts/2015-05-01-brackets.html.
Jones, S.L.P. (1987) The Implementation of Functional Programming Languages. Prentice-Hall, Inc.
Kiselyov, O. (2018) ‘λ to SKI, semantically: Declarative pearl’, in International symposium on functional and logic programming. Springer, pp. 33–50.
Le, J. (2017) Introduction to Singletons (Part 1). Available at: https://blog.jle.im/entry/introduction-to-singletons-1.html.
Lynn, B. (2017) A Combinatory Compiler. Available at: https://crypto.stanford.edu/~blynn/lambda/sk.html.
Lynn, B. (2022) Combinatory Logic. Available at: https://crypto.stanford.edu/~blynn/lambda/cl.html.
Lynn, B. (2022) Kiselyov Combinator Translation. Available at: https://crypto.stanford.edu/~blynn/lambda/kiselyov.html.
Mahler, T. (2021) Implementing a Functional Language with Graph Reduction. Available at: https://thma.github.io/posts/2021-12-27-Implementing-a-functional-language-with-Graph-Reduction.html.
Mahler, T. (2021) λ-Calculus, Combinatory Logic and Cartesian Closed Categories. Available at: https://thma.github.io/posts/2021-04-04-Lambda-Calculus-Combinatory-Logic-and-Cartesian-Closed-Categories.html.
Mahler, T. (2022) Evaluating SKI combinators as native Haskell functions. Available at: https://thma.github.io/posts/2022-02-05-Evaluating-SKI-combinators-as-native-Haskell-functions.html.
Naylor, M. (2008) Evaluating Haskell in Haskell, The Monad.Reader. Edited by W. Swierstra, Issue 10.
Seo, K.Y. (2016) Write you an interpreter. Available at: http://kseo.github.io/posts/2016-12-30-write-you-an-interpreter.html.
Turner, D.A. (1979) ‘A new implementation technique for applicative languages’, Software: Practice and Experience, 9(1), pp. 31–49.

by Brent at July 13, 2023 08:55 PM

Tweag I/O

Python Monorepo: an Example. Part 2: A Simple CI

For a software team to be successful, you need excellent communication. That is why we want to build systems that foster cross-team communication, and using a monorepo is an excellent way to do that. However, designing a monorepo can be challenging as it impacts the development workflow of all engineers and comes with its own scaling challenges. Special care for tooling is required for a monorepo to stay performant as a team grows.

In our previous article, we described our choice of structure and tools to bootstrap a Python monorepo. In this post, we continue by describing the continuous integration system (CI). We made a GitHub template which you can use to bootstrap your own monorepo. Check it out on Tweag’s GitHub organization in the python-monorepo-example repository.

We exemplify our CI in the context of a GitHub repository. Our solution, however, is not GitHub-specific by any means: GitLab or Jenkins could also work with the same approach.

Before diving into the details, we would like to acknowledge the support we had from our client Kaiko, for which we did most of the work described in this series of blog posts.

Goals for Our CI

Teams use a CI pipeline to ensure a high level of quality in their work. Upon changes to the code, the CI runs a list of commands to check quality. In our case we want to:

  • check that the code is well formatted;
  • lint the code, i.e., adhere to some standards;
  • check the typing;
  • run tests.

Because our series focuses on bootstrapping a monorepo for an early-stage startup (from day 0 until approximately the end of the first year), we don’t describe anything fancy or complicated.

We aimed for a simple system that achieves a great deal of reproducibility and modularity but doesn’t require any DevOps skills. As such, it is well suited to the early months of a tech startup, when a handful of non-specialist engineers lay out the foundations.

Structuring the Workflows

Making a CI pipeline for a repository with a single Python package is usually quite simple. All steps are run from the root of a single Python package folder, so it is easy to install its dependencies in a single environment and run any command we want. In the case of a monorepo where there are many packages to process, we can’t share a single environment for all packages as their dependencies could conflict.

One could build a CI pipeline for each of the packages that needs to be checked, but not all steps are “equal”. Checking formatting, for instance, runs quickly on a large code base and does not require any of the packages’ dependencies, whereas checking imports or typing requires dependencies to be installed.

To that end, we distinguish two types of CI pipelines:

  • A global CI, that runs in the repository’s top-level folder.
  • Any instances of a local CI, each instance running in a package’s folder.
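
For instance, a local pipeline can be made to run only when files in its package change, using a `paths` filter on the trigger (a sketch of the mechanism; the `libs/mylib` path is illustrative):

```yaml
on:
  pull_request:
    paths:
      - "libs/mylib/**"
```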

The Global CI

When a Pull Request triggers the CI, this global pipeline executes once for the whole repository. It completes quickly, giving fast feedback for the most common issues.

It needs the development dependencies from ./dev-requirements.txt installed (at the top-level), but none of the dependencies of the code it checks.

The global CI is declared in .github/workflows/top_level.yaml.

---
name: Top-level CI
on:
  workflow_dispatch:
  pull_request:
  push:
    branches: main # Comment this line if you want to test the CI before opening a PR

jobs:
  ci-global:
    runs-on: ubuntu-22.04
    timeout-minutes: 10

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Install Python
        uses: actions/setup-python@v3
        timeout-minutes: 5
        with:
          python-version-file: .python-version
          cache: "pip"
          cache-dependency-path: |
            pip-requirements.txt
            dev-requirements.txt

      - name: Install Python dependencies
        run: |
          pip install -r pip-requirements.txt
          pip install -r dev-requirements.txt

      - name: Format Python imports
        run: |
          isort --check-only $(git ls-files "*.py")

      - name: Format Python
        run: |
          black --check $(git ls-files "*.py")

      - name: Lint Python
        run: |
          flake8 $(git ls-files "*.py")

      # Are all public symbols documented? (see pyproject.toml configuration)
      - name: Lint Python doc
        run: |
          pylint $(git ls-files "*.py")

A few notes about the pipeline above:

  1. We assume a fixed version of Python, which is stored in the top-level .python-version file. Some tools can pick it up, making it easier to create reproducible environments. It also avoids hardcoding the version in the pipeline file and duplicating it. This file is also consumed by pyenv, which we recommend for managing the Python interpreter instead of relying on a system install.

    Later on, this mechanism can be generalized to multiple Python versions using GitHub matrices, but we don’t delve into this level of detail in this post.

  2. We use an exact version of Ubuntu (ubuntu-22.04), instead of using a moving version like ubuntu-latest. This improves reproducibility and avoids surprises1 when GitHub updates the version pointed to by ubuntu-latest.

  3. We protect the entire pipeline from running too long by using timeout-minutes. On multiple occasions, we have seen pipelines get stuck and consume billable minutes until the default timeout is reached. The default timeout is 360 minutes! We’d rather set a timeout to avoid wasting money.

  4. Dependencies are cached, thanks to the cache and cache-dependency-path entries of the setup-python action. This makes the pipeline noticeably faster when dependencies are unchanged.

Finally, note that the global pipeline can be tested locally by developers, by using act as follows:

act -j ci-global

The Local CI

The local CI’s pipeline executes from a project or library folder, in a context where the dependencies of the concerned project or library are installed. Because of this, it is usually slower than the global pipeline. To mitigate this, we make sure this pipeline runs only when it is relevant to do so, as explained in the triggers section. When a Pull Request triggers the CI, the local pipeline executes zero or more times, depending on the list of files being changed. The multiple runs (if any) all start from different subfolders of the monorepo and don’t overlap.

Because the monorepo contains many Python packages that are all type-checked and tested similarly, we can follow the DRY2 principle and share the same definition of the CI pipelines.

This is possible in GitHub thanks to reusable workflows:

---
name: Reusable Python CI

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
      install-packages:
        description: "Space-separated list of packages to install using apt-get."
        default: ""
        type: string
      # To avoid being billed 360 minutes if a step does not terminate
      # (we've seen the setup-python step below do so!)
      ci-timeout:
        description: "The timeout of the ci job. The default is 25min"
        default: 25
        type: number

jobs:
  ci-local-py-template:
    runs-on: ubuntu-22.04
    timeout-minutes: ${{ inputs.ci-timeout }}

    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}

    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Python
        uses: actions/setup-python@v4
        timeout-minutes: 5 # Fail fast to minimize billing if this step freezes (it happened!)
        with:
          python-version-file: ${{ github.workspace }}/.python-version
          cache: "pip"
          cache-dependency-path: |
            dev-requirements.txt
            pip-requirements.txt
            ${{ inputs.working-directory }}/requirements.txt

      - name: Install extra packages
        if: ${{ inputs.install-packages != ''}}
        run: |
          sudo apt-get install -y ${{ inputs.install-packages }}

      - name: Install dependencies
        run: |
          pip install -r ${{ github.workspace }}/pip-requirements.txt
          pip install -r ${{ github.workspace }}/dev-requirements.txt -r requirements.txt

      - name: Typechecking
        run: |
          pyright $(git ls-files "*.py")

      - name: Test
        run: |
          python3 -m pytest tests/  # Assume that tests are in folder "tests/"

This pipeline is used by all Python packages. This is possible because they share the same structure, outlined in the first post of this series:

  1. All packages share the same development dependencies (for linting, formatting and testing), defined in the top-level dev-requirements.txt file.
  2. All packages have their own dependencies in a requirements.txt file in their own folder.

Sometimes, a Python package in our monorepo may need specific system-wide dependencies, for instance CUDA or ffmpeg. The install-packages parameter allows the pipeline of a specific library to install additional system packages via apt.3
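As an illustration, a hypothetical libs/video package that needs ffmpeg could call the reusable workflow like this (the library name and paths entries are made up; the full trigger pattern is explained in the Triggers section):

```yaml
---
name: CI libs/video

on:
  pull_request:
    paths:
      - "libs/video/**"
  workflow_dispatch:

jobs:
  ci-libs-video:
    uses: ./.github/workflows/ci_python_reusable.yml
    with:
      working-directory: libs/video
      install-packages: "ffmpeg"
    secrets: inherit
```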

For now, we have defined a reusable workflow. But this is not yet enough: we need to actually run it! In the next section, we show how to trigger this pipeline for each library.

Triggers

In order to use the reusable workflow, we need to trigger it for every Python package that we want to check. However, we do not want to trigger the pipelines for all Python packages in the monorepo on every small change. Instead, we want to check the Python packages that are impacted by the changes of a Pull Request. This is important to make sure that the monorepo setup can scale as more and more Python packages are created.

GitHub has the perfect mechanism for this: the paths keyword, which can be specified in a workflow’s trigger rules. With this specification, a workflow is triggered if and only if at least one file or directory changed in the Pull Request matches one of the expressions in the paths list. If the files and directories affected by a PR don’t match any of the expressions, the pipeline is not started at all.

As in the first post of this series, suppose the monorepo contains two libraries, named base and fancy:

...see the first post...
├── dev-requirements.txt
├── pip-requirements.txt
├── pyproject.toml
└── libs/
    ├── base/
    │   ├── README.md
    │   ├── pyproject.toml
    │   └── requirements.txt
    └── fancy/
        ├── README.md
        ├── pyproject.toml
        └── requirements.txt

Then the pipeline for the fancy library is as follows:

---
name: CI libs/fancy

on:
  pull_request:
    paths:
      - "dev-requirements.txt"
      - "pip-requirements.txt"
      - ".github/workflows/ci_python_reusable.yml"
      - ".github/workflows/ci_fancy.yml"
      - "libs/base/**" # libs/fancy depends on libs/base
      - "libs/fancy/**"
  workflow_dispatch: # Allows triggering the workflow manually from the GitHub UI

jobs:
  ci-libs-fancy:
    uses: ./.github/workflows/ci_python_reusable.yml
    with:
      working-directory: libs/fancy
    secrets: inherit

This pipeline runs if there are changes to any of the following:

  1. the top-level files dev-requirements.txt and pip-requirements.txt;
  2. the pipeline’s files;
  3. the code the library depends on, i.e. libs/base;
  4. the library’s own code, i.e., libs/fancy.

Just like the global pipeline, a local pipeline can be executed locally by a developer using act.

For example, the pipeline above is executed locally by running:

act -j ci-libs-fancy

And One Template to Rule Them All

We advise using a template to automate the creation of a Python package in the monorepo, for two main reasons:

  • To save time for developers, by providing a simple means to do a complex task. Not everyone is familiar with the ins and outs of a Python package’s scaffolding, yet this should not prevent anyone from creating one.
  • To keep all Python packages consistent and principled. In the early days of an organization, when there are not many examples to copy/paste, things tend to grow organically. In this scenario, there is a high risk of introducing incompatibilities.

In addition, in the long run, having consistent libraries will help introduce global changes. This is important in the early days of a startup, as the technological choices and solutions are often changing.

We chose cookiecutter to make and use templates because it can be installed with pip and is known to most seasoned Python developers. It uses Jinja2 templating to render files, both for their file name and content.

Our template is structured as follows:

{{cookiecutter.module_name}}/
├── mycorp
│   └── {{cookiecutter.module_name}}
│       └── __init__.py
├── pyproject.toml
├── README.md
├── requirements.txt
├── setup.cfg
└── tests
    ├── conftest.py
    └── test_example.py

For example, the templated pyproject.toml looks like this:

[tool.poetry]
name = "mycorp-{{ cookiecutter.package_name }}"
version = "0.0.1"
description = "{{ cookiecutter.package_description }}"
authors = [
    "{{ cookiecutter.owner_full_name }} <{{ cookiecutter.owner_email_address }}>"
]
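The template variables referenced above would be declared in a cookiecutter.json file at the template’s root. A minimal sketch (the default values are hypothetical):

```json
{
  "module_name": "example_lib",
  "package_name": "example-lib",
  "package_description": "One-line description of the package",
  "owner_full_name": "Ada Lovelace",
  "owner_email_address": "ada@mycorp.com"
}
```

Running cookiecutter on the template then prompts the developer for each of these values and renders the file tree accordingly.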

The template also uses a hook to generate the new library’s instance of the CI, i.e., the second YAML file described in the Triggers section above. This is convenient, as it is mostly boilerplate that does not need to be adapted.

While this may seem like a minor thing, this template proved to foster the adoption of shared common practices, both at the level of the Python configuration (pyproject.toml, requirements.txt) and at the level of the CI (hey, it’s automatically adopted!).

We like to compare the adoption of shared practices to excellent UI design. Any UI designer would tell you it takes only a minor glitch or a minor pain point in a UI to deter users from adopting a new tool or application. We believe the same holds for best practices: making things easier for developers is critical for the adoption of a new tool or new workflow.

Possible Improvements

A few quality-of-life improvements are possible to maintain uniformity and augment automation:

  1. The CI can be augmented to check that all pyproject.toml files share the same values for some entries. For example, the values in the [build-system] section should be the same in all pyproject.toml files:

    [build-system]
    requires = ["poetry-core>=1.0.0"]
    build-backend = "poetry.core.masonry.api"

    The same goes for the configuration of the formatter ([tool.black] section), the linter ([tool.pylint] section) and the type-checker ([tool.pyright] section).

    Such a checker can be implemented using the toml library.

  2. In a similar vein, the CI can be augmented to check the definitions of the local pipelines.

    This check could be avoided if we were using a tool dedicated to monorepo CIs, Pants for example, which has good Python support.4 This is one topic where our setup is a trade-off between simplicity (GitHub Actions only) and efficiency (some pipeline duplication happening, mitigated by additional checks).

  3. We recommend using a top-level CODEOWNERS file to automatically select reviewers on Pull Requests. A CODEOWNERS file maps paths to GitHub handles. When a Pull Request is created, and if the repository is configured accordingly, GitHub goes over the list of paths changed by the PR, finds the matching GitHub handle for each path, and adds that handle to the list of reviewers of the Pull Request.

    In this manner, Pull Request authors don’t have to select reviewers manually, which saves them some time and makes clear who owns what in the codebase.

    One can additionally require approvals from owners to merge, but in a startup that is scaling up we don’t recommend it: requiring approval from owners can slow things down when people are unavailable, as the teams are usually too small to offer 365 days of availability on every part of the codebase.

  4. We recommend using a mergebot like Kodiak or Mergify. In our experience, mergebots are like chocolate: once you’ve tasted them, it’s really hard not to love them.

    In practice, they save valuable time for developers by letting them delegate the boring job of rebasing when concurrent Pull Requests compete to merge. If the codebase has good test coverage, developers can assign a Pull Request to the bot and forget about it, knowing the bot will eventually merge it without compromising quality.

Conclusion

We have presented a continuous integration (CI) system for a monorepo that strikes a good balance between being easy to use and being featureful.

Notable features of this CI include a clear separation between fast repo-wide jobs and slower library-specific jobs. This CI is modular thanks to paths triggers and reusable workflows, and relatively fast thanks to caching. Finally, developers can jump into the monorepo and start new projects thanks to the templates it provides.

This CI is simple enough for the early months of a startup when CI specialists are not yet available. Nonetheless, it paves the way for a shared culture that fosters quality and, we believe, delivers faster.


  1. You still have something to do when using an exact version of a runner: you need to change it when GitHub deprecates and ultimately removes it. But unlike things breaking when a latest runner is updated, you get a notification from GitHub before it happens, giving you time to plan ahead. If you know it beforehand, it’s not a surprise anymore, right?
  2. “Don’t Repeat Yourself”
  3. Generally speaking, apt is subpar for reproducibility. However, it does fit well for an early-stage CI that wants to keep it simple. If reproducibility starts being an issue here, Nix could enter the picture.
  4. We don’t demonstrate the usage of Pants for this monorepo, because Pants works better with a shared Python environment (one single requirements.txt file at the top-level of the repository), whereas our client Kaiko had business incentives to work with an environment per Python package (as described in the first post of this series). If we had been in a global sandbox scenario, Pants would be the obvious choice for preparing this monorepo for scaling, once a CI specialist has been hired for example. Pants is our favorite pick here because our monorepo is pure Python and Pants’ Python support is excellent.

July 13, 2023 12:00 AM

July 11, 2023

Chris Smith 2

Approval and Score Voting are Intrinsically Tactical

My previous post was a large-scale comparison of approaches to voting based on modeling voters and simulating elections. I ran into a specific wrinkle there that I want to comment on from a more theoretical point of view. One question I set out to explore was how much voters benefit from tactical voting — that is, filling out their ballot by anticipating how others will vote and voting with that in mind, rather than merely giving a sincere expression of their own preferences. To do that, I wanted to compare sincere versus tactical voting as a strategy in each system.

I ran into a problem with this specific analysis. It’s ultimately impossible to give a definitive answer for what it means to vote sincerely in an approval or score-based voting system. These voting systems force voters to make tactical choices because they do not even permit ballots that simply reflect a voter’s preferences in a straightforward way. One must make tactical decisions in order to vote at all.

I’ll explore this from two perspectives: the challenge of threshold-setting in approval voting, and the challenge of choosing a scale of voter satisfaction in score voting. Ultimately, I’ll make the claim that these are different ways of expressing the same fundamental problem.

Approval voting requires tactical threshold-setting.

You are voting in an election between three candidates: Alice, Bob, and Casey. You’re a big fan of Alice, and would love to see her elected. Bob is terrible: he kicks puppies after letting them poop on your lawn and not cleaning it up. Casey is alright; you’re not excited by them, but it wouldn’t be a disaster to see them elected. You arrive at the ballot box, and see this:

Vote for as many as you like:
[ ] Alice
[ ] Bob
[ ] Casey

What do you do? Clearly, you vote for Alice, and you don’t vote for Bob (that sassa frassin’ dirty no good puppy-kicker). But what about Casey? If you don’t vote for Casey, and then Casey comes in second just barely behind Bob, you’ll regret the decision. If you do vote for Casey, and then Casey edges out Alice for the win, you’ll also regret the decision.

This is an example of a tactical voting problem. But there’s something else going on, too. In most situations, we can think about tactical voting by comparing tactical voting to sincere voting. In this case, though, which choice is more sincere? There’s simply no good answer. You can vote for Casey to differentiate them from Bob, or you can not vote for Casey to differentiate them from Alice, but the ballot doesn’t let you explain that you prefer Alice over Casey and prefer Casey over Bob, so you are forced to make a tactical decision: which of those preferences should you express, and which should you keep to yourself?

One could argue that voters “sincerely” approve or disapprove of certain candidates, and an approval ballot can sincerely express this. However, this oversimplifies how voters perceive candidates. Preferences are relative: it’s rare to find a candidate so perfect that a voter couldn’t prefer someone else, nor so bad that a voter couldn’t imagine anyone worse. Factoring a voter’s overall level of approval — their general optimism or pessimism about politicians, for instance — into the effectiveness of their vote would be an affront to democratic principles. Everyone’s vote should count equally, regardless of their general attitude toward politics. A ballot is an expression of relative preference, not overall sentiment. In the context of approval voting, therefore, there is no objectively sincere way to decide which candidates should receive a voter’s approval.

Limited precision score voting requires tactical threshold-setting.

The same argument applies to score-based voting systems, to the extent that they offer limited precision to score candidates. We might try to fix the example above by offering an intermediate option: a 1-star rating means you can’t stand this candidate, 2 stars means they are alright, and 3 stars means you love them. But now enter Donna, a fourth candidate who feels a bit scummy: you’d prefer Casey to Donna, but Donna is still far better than Bob. Now you’re back in the same dilemma: you cannot merely express your preferences without making a tactical decision.

Score voting requires a tactical preference scale.

As stated above, it might seem that intrinsic tactical voting only matters when there are fewer rating choices than candidates. This isn’t the case, though. Even with essentially unlimited precision, voting systems that average voters’ scores are still inherently tactical.

Your local election is being held again, with Alice, Bob, and Casey all running for the second time. (Donna decided not to run again.) This time, the ballot reads as follows:

Rate each candidate from 1 to 100.
[ ___ ] Alice
[ ___ ] Bob
[ ___ ] Casey

You love Alice, and you’re happy to assign her a rating of 100. Bob is terrible, and clearly gets a 1. But is Casey a 25? 50? 75? The election will be decided by averaging the scores for each candidate, so if you rate Casey too high, they might edge out Alice for the win, but too low and they might be edged out by Bob.

It’s a little less obvious here that the decision of how to rate Casey is inherently tactical. Nevertheless, I’d argue that it is, because the scale on which to rate candidates is not well-defined.

An aside about pitch

Because “satisfaction” or “happiness” are such nebulous terms, it’s easier to explain what I mean in terms that are more concrete. Let’s talk about the pitch of a musical note, which is also all about perception, but gives us a precise selection of units of measure to investigate.

Musical notes labeled by letter and octave
  • To a musician, at least in the modern western world, pitch of musical notes is often measured in steps. Each consecutive key on a piano keyboard (including the black keys) is a half-step of difference in pitch. The distance from A2 to A4, for instance, is 24 keys, or 12 steps.
  • In physics, pitch is represented by frequency, and measured in Hertz: the number of oscillations per second of the sound wave that’s produced. A2 oscillates 110 times per second, while A4 oscillates 440 times per second. That’s a difference of 330 Hertz.
  • Let’s consider the note C4 (also called middle C). It’s 15 keys, or 7.5 steps, above A2. It oscillates about 262 times per second, which is 152 Hertz above A2.

Suppose a musician and a physicist are asked to rate the relative pitch of A2, C4, and A4 on a scale from 1 to 100. They both assign A2 a score of 1 because it’s the lowest pitch, and A4 a score of 100 because it’s the highest pitch. But how do they score the C4? The musician might look at the number of steps of difference: 7.5 out of 12, which is a score of about 63. The physicist might look at the frequency difference: 152 out of 330, which is a score of about 47.

Why do they reach different results? They are considering pitch on different scales with different rates of growth. These aren’t the rather boring differences we see in distance, either: whether you’re measuring in inches, centimeters, or light-years, twice as far is still twice as far. But frequency grows exponentially relative to steps, so it increases much faster in higher octaves. Conversely, steps grow logarithmically with frequency, so they increase much faster at lower frequencies and then slow down. Crucially, neither of these measurements is the right one all the time; it’s a matter of choosing a perspective and carefully defining what you’re measuring.
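The two scores can be reproduced numerically. Here is a small sketch (using the text’s convention of 6 whole steps per octave, and C4 rounded to 262 Hz):

```python
import math


def rescale(value, low, high):
    """Map a value in [low, high] linearly onto a 1-to-100 score."""
    return 1 + 99 * (value - low) / (high - low)


# Frequencies in Hertz.
A2, C4, A4 = 110.0, 262.0, 440.0

# The physicist's scale: linear in frequency.
physicist_score = rescale(C4, A2, A4)  # about 47

# The musician's scale: steps, i.e. logarithmic in frequency.
# One octave (a doubling of frequency) spans 6 whole steps.
def steps_above(f_low, f_high):
    return 6 * math.log2(f_high / f_low)

musician_score = rescale(steps_above(A2, C4), 0, steps_above(A2, A4))  # about 63
```

The only difference between the two scores is the choice of scale passed to the same rescaling function.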

But what about voters and elections?

That kind of scaling issue, where there are different scales that change at different rates, is very common when we deal with perception and subjective experience, whether it’s the pitch of a sound, the brightness of a light… or, far more so, experiences of happiness or pain or satisfaction. These experiences don’t live on one definitive scale where we can compare relative distances or take averages. Rather, the scale itself is a matter of perspective, and the more subjective the experience, the harder it is to define that perspective.

So how do you rate Casey on your scored ballot? Maybe you pick a logarithmic scale, analogous to musical steps, and Casey receives a score of 63. Or maybe you pick an exponential scale, analogous to frequency, and Casey receives a 47. Neither of these is a fundamentally sincere or insincere way to vote, because the ballot didn’t tell you which scale to measure on. Each simply reflects a point of view about what satisfaction means and what scale it’s best measured on.

But they do have tactical consequences: choosing the logarithmic scale that rates Casey as a 63 means using your ballot more to stop Bob from winning, while accepting that you’re doing less to help Alice beat Casey. Conversely, choosing the exponential scale that rates Casey as a 47 means using your ballot more to help Alice, and accepting that you’re doing less to help Casey beat Bob if Alice isn’t the winner.

Once again, you’re being forced to make a choice that has no sincere answer, but definitely has tactical implications. The tactics are intrinsic to the voting system.

Tactical thresholds and scales are the same thing.

These might initially seem like two very different phenomena, but I’d argue they are two manifestations of the same thing. In the first election, when you were asked to make a choice whether to approve of Casey (grouping them with Alice) or disapprove (grouping them with Bob), one way to look at this is that you were asked whether Casey is more similar to Bob or Alice in terms of how satisfied you’d be with their election.

Notice that if you adopt the logarithmic scale, where Casey scores a 63, you’re likely to consider the most sincere answer to be grouping Casey with Alice, and therefore giving them your approval. On the other hand, if you adopt the exponential scale and rate Casey a 47, you’re likely to have a tough choice, but ultimately conclude it’s more sincere to group them with Bob and not give them your approval.

In this way, the threshold-setting problem is just a consequence of the scale-setting problem. Any threshold you choose effectively defines a scale where that threshold is the midpoint between the two extremes. The precision still matters, but only in the sense that rounding error further exaggerates the difference between the scales. That is its own separate weakness, but the fact that voting is intrinsically tactical ultimately comes from the scale-setting problem in both cases.

This has consequences.

This originally came up, for me, because it made it difficult to say what it means to compare approval, range, and STAR voting systems in my simulations with sincere ballots. These voting systems do very well on many measures, such as maximizing satisfaction of voters and making decisions consistent with democratic principles like majority rule. However, when it comes to the goal of minimizing tactical voting, there’s a problem because non-tactical ballots simply do not exist. I attempted to approximate a “sincere” ballot by making these tactical choices arbitrarily, but this was rightly criticized as sub-optimal in many cases.

But outside the challenges of implementing my simulations, it has consequences for real elections, as well. An important goal in comparing election systems is to minimize the significance of tactical voting, since not all voters are equipped to vote tactically. But what does that mean when there’s no such thing as a non-tactical vote? For the same reason that I struggled to perform my analysis, voters who haven’t followed election polls and strategy closely may struggle to know how to vote at all.

With approval and score-based voting, voters are asked to cast ballots in a way that inherently involves tactical decisions, leaving no escape valve for sincere expression. This honestly can feel more like playing a complex board game than seriously assessing voter preferences. What implications might this have for voters’ decisions on whether to vote, or their confidence in the legitimacy of election results? I don’t have those answers, but they are questions worth considering.

by Chris Smith at July 11, 2023 11:37 PM

July 10, 2023

Chris Smith 2

Simulating Elections with Spatial Voter Models

Democracy: a concept almost universally revered, underpinned by the foundational act of voting. However, interpreting voting results to make fair and representative decisions is anything but straightforward. While it’s tempting to think the option with the most votes should just triumph, reality proves more complex. Our elections are held together with a plethora of details — primaries, runoffs, ranked ballots, and more — that work together to produce reasonable outcomes.

But are these mechanisms functioning as intended? How effectively do they work, and which ones outperform others? What unintended consequences might they bear? Answering these vital questions is critical to the ongoing project of refining our democracies. Yet, answers to these questions often rely on anecdotes, oversimplification, and broad assumptions. In this article, I’ll guide you through a more robust approach to these questions — one that relies on computational modeling and simulation. (Here’s the code for the simulation.) Along the way, we’ll uncover some eye-opening consequences of our chosen election methodologies.

However, the main challenge in modeling any social phenomenon is translating the complex behaviors of humans into mathematical constructs. A model that misrepresents voters’ behaviors isn’t merely useless — it can actively mislead. We’ll encounter instances of this pitfall along our journey. Thankfully, when it comes to voter behavior, it’s possible to utilize a relatively simple, visual, and intuitive approach: spatial voter modeling.

So, join me as we embark on this investigation, exploring some critical questions surrounding voting and democracy through the lens of computational simulation with spatial voter models.

What’s so hard about voting, anyway?

Democracy is often described as “majority rule,” but what happens when there are multiple choices, and none of them can claim majority support?

Plurality voting (misleadingly termed “first past the post” or FPTP), where the candidate with the most votes wins, works well for choices between two options. When there are more than two choices, though, it has unfortunate consequences:

  • Vote splitting: voters split their choices between many similar options, so no single candidate receives enough votes to stand out.
  • The spoiler effect: a special case of vote splitting where even a hopeless candidate changes the result by attracting enough votes to cause a different candidate to lose.

Common wisdom suggests that the United States and other modern democracies conduct most elections with simple plurality voting. Common wisdom, though, is commonly wrong! In fact, plurality voting is very rarely used on its own. Its flaws are hard to ignore, so it’s instead propped up by a labyrinth of mechanisms and voter behaviors to compensate for them. Partisan primaries are used to coalesce political party support around a single choice to minimize vote splitting. Voters are persistently reminded not to “throw their vote away” on spoilers.

These are examples of tactical voting: voting in a way that doesn’t honestly express your preferences in order to produce an advantage for your preferred outcomes. Tactical voting means the outcome isn’t determined by majority support, but rather by cleverness and manipulation.

Alternatives to plurality voting

Given the challenges posed by plurality voting, numerous alternative systems have been proposed that aim to choose a more representative outcome. Instead of working around vote-splitting and the spoiler effect, they try a frontal assault on the whole phenomenon by changing the rules of the election itself. These often employ ranked or rated ballots, which ask voters to either rank candidates in order of preference or score candidates on a scale. By collecting more detailed information on voter preferences, they can make a choice that better reflects those preferences.

Choosing a winner from these ballots, though, is more complex than single votes, and many systems have been proposed. Instant runoff voting, or IRV, chooses a winner by eliminating candidates one at a time. It has been used in elections across the United States in Maine, Vermont, New York, Virginia, Utah, Alaska, and more. Other alternatives that are commonly advocated are range voting, STAR voting, Borda count, and many more.
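To make the elimination process concrete, here is a minimal sketch of an IRV count (not the code of any real election system; it assumes every ballot ranks every candidate):

```python
def irv_winner(ballots):
    """Instant-runoff voting: repeatedly eliminate the candidate with the
    fewest first-choice votes until some candidate holds a majority.

    Each ballot is a list ranking all candidates, most preferred first.
    """
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tallies = {c: 0 for c in remaining}
        for ballot in ballots:
            # Each ballot counts toward its highest-ranked surviving candidate.
            top = next(c for c in ballot if c in remaining)
            tallies[top] += 1
        leader = max(tallies, key=tallies.get)
        if 2 * tallies[leader] > len(ballots) or len(remaining) == 1:
            return leader
        remaining.remove(min(tallies, key=tallies.get))
```

Notice how the winner can differ from plurality: a candidate with the most first-choice votes can still lose once eliminated candidates’ ballots transfer to a rival.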

In more theoretical circles, Condorcet voting is understood by most experts as the gold standard in social decision making. It tries to choose a Condorcet winner: the candidate who would win a head-to-head election against any other candidate. Selecting the Condorcet winner in a ranked-ballot election offers huge advantages: it is the unique choice that best aligns with the principle of majority rule and expresses the unambiguous preference of voters. Crucially, though, a Condorcet winner isn’t guaranteed to exist at all. There are many Condorcet-consistent voting systems that choose the Condorcet winner if one exists, and fall back to someone else if not!
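A Condorcet winner can be found directly from ranked ballots by checking every head-to-head matchup. A minimal sketch, assuming every ballot ranks every candidate:

```python
def condorcet_winner(ballots):
    """Return the Condorcet winner (beats every rival head-to-head), or None.

    Each ballot is a list ranking all candidates, most preferred first.
    Assumes at least one ballot.
    """
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b.
        wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        return 2 * wins > len(ballots)

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # no Condorcet winner: the pairwise preferences form a cycle
```

The None case is exactly the failure mode discussed below: with a preference cycle (A beats B, B beats C, C beats A), no candidate wins every matchup.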

It’s easy to get lost in the world of voting systems. But before we get there, I want to focus less on the specific election systems for a bit, and more on an opportunity: if we could observe any of them in action, we could dig in and see with our own eyes what the practical consequences are. To do that, we’ll want to pose some key questions that can be answered through computational modeling and simulation.

Key questions

Choosing an election system, as we’ve seen, is a delicate balance. Practical considerations and theoretical ideals are both important, but there’s a third leg to the tripod: empirical evidence. To fully understand the implications of these systems, we can conduct a numerical investigation of simulated elections.

Our exploration will center around these key questions:

  1. Although these proposed alternative election systems can yield different outcomes for the same voter preferences, how often do these differences actually occur? When they do, how can we describe the nature of the disagreement?
  2. Tactical voting — the need for it, and the effect it has when it occurs — is a critical concern in building fair voting systems. What forms does tactical voting take, and what is its practical effect when it happens?
  3. The chief benefits of Condorcet voting hinge upon the presence of a Condorcet winner. But how frequently do they appear, and in what types of elections might they fail to emerge? How impactful is this concern for real-world election scenarios?
  4. The goal of any social process should be to produce happy people. Aside from all the tactical concerns, can we say anything about which election systems actually make people happier?

Ideally, we’d delve into these questions using a vast store of real-world election data. However, such data is notoriously scarce. Uses of instant runoff voting (IRV) are on the rise, but remain somewhat uncommon. Moreover, when IRV elections do occur, comprehensive data regarding ballot distributions is not always disclosed. Real world instances of Condorcet elections and other voting systems are even more rare.

One solution lies in computational modeling and simulation, allowing us to generate an unlimited array of hypothetical election scenarios and scrutinize how various voting systems react. Though it comes with its own challenges, this methodology equips us to address the questions above by examining a multitude of different scenarios, offering valuable insights into the dynamics of voting systems.

Understanding voter behavior from simple to sophisticated models

Modeling voter behavior is critical in simulating elections, but this task is not without its challenges. As human beings, our reasons for making choices are sometimes inscrutable. Nevertheless, we can construct models to approximate the behavior of voters. It’s crucial to create effective and realistic models; otherwise, our simulations could end up unhelpful or even misleading.

Simple models: Impartial Culture (IC) and variants

The most basic model we could consider is the Impartial Culture (IC) model. In IC, each voter randomly ranks the candidates. Despite its simplicity, the IC model is generally considered unrealistic. Voters simply do not make entirely independent and unbiased decisions in this way.
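For concreteness, an IC electorate is trivial to sample: every ballot is just an independent, uniformly random permutation of the candidates. A quick sketch (Python, illustrative only):

```python
import random

def impartial_culture(candidates, n_voters, rng):
    """Sample an Impartial Culture electorate: each ballot is an
    independent, uniformly random ranking of the candidates."""
    ballots = []
    for _ in range(n_voters):
        ballot = list(candidates)
        rng.shuffle(ballot)  # uniform random permutation
        ballots.append(ballot)
    return ballots

ballots = impartial_culture(["A", "B", "C"], 5, random.Random(0))
```

The total absence of structure here is the point: nothing ties one voter’s ranking to another’s, which is exactly why the model is considered unrealistic.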

More nuanced variations such as the Impartial Anonymous Culture (IAC) and even Impartial Anonymous Neutral Culture (IANC) were developed to address this flaw. These models very gently introduce dependence between the behavior of different voters. However, they continue to generate essentially equally likely outcomes.

Based on these models, the likelihood of a clear winner is low: a Condorcet winner is predicted nearly 0% of the time by IC and around 37% for IAC when the number of candidates is significant. Furthermore, they predict little agreement between different voting systems. However, we must remember that these models are based on assumptions of neutral and random voter behavior, which do not reflect actual voting patterns.

Models like IC and IAC are better suited for exploring worst-case scenarios than predicting realistic outcomes. To explore more probable outcomes, we need to adopt a different voter model.

Spatial voter models

Real world voters base their choices on factors such as political ideology, personal values, and group identification, leading to clearly discernible patterns in voting behavior. This is where spatial voter models come into play, as they can reflect these real world patterns with a bit of tuning and care.

Voters arranged in a vector space

Spatial voter models place voters and candidates in a multi-dimensional “space” of political opinion. Instead of location, the dimensions in this space are stances on various issues or alignment with political philosophies: for instance, the left-right political divide can be seen as a one-dimensional space, with left-leaning, right-leaning, and moderate voters and candidates each having their place on a continuum.

A one-dimensional spatial model used to explain election results

A two-dimensional example is found in the Nolan chart, an advocacy tool used widely by the Libertarian party. One axis represents a person’s stance on “personal freedom”, and the other on “economic freedom”.

The Nolan Chart: a two-dimensional spatial model used for advocacy

Because voters and candidates are located in a geometric space, we can consider the distance and direction between them. The premise of spatial models is that voters are more likely to choose the candidates who are closest to them in this “space”, indicating agreement on fundamental positions.

Spatial model #1: Fixed variance uniform model

The most straightforward spatial model we can use is a fixed variance model with a uniform distribution of voters. In this model, each dimension varies by the same amount, and every candidate or voter has an equal likelihood of being at any point within the political space. This space forms a square (in two dimensions) or cube (in three dimensions) with voters dispersed uniformly throughout.

A fixed-variance uniform 3D spatial voter model: each dot is a voter.
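As a concrete sketch of this model (Python, illustrative; the dimension and population sizes are arbitrary), we can sample voters and candidates uniformly from a unit hypercube and have each voter rank candidates by Euclidean distance:

```python
import math
import random

def uniform_points(n, dims, rng):
    # Uniform over the unit hypercube [0, 1]^dims: every point in
    # the opinion space is equally likely.
    return [[rng.random() for _ in range(dims)] for _ in range(n)]

def rank_by_distance(voter, candidates):
    # A voter prefers candidates closer to them in opinion space.
    return sorted(range(len(candidates)),
                  key=lambda c: math.dist(voter, candidates[c]))

rng = random.Random(42)
candidates = uniform_points(3, 2, rng)   # 3 candidates in 2 dimensions
voters = uniform_points(100, 2, rng)     # 100 voters
ballots = [rank_by_distance(v, candidates) for v in voters]
```

The `rank_by_distance` step is what turns geometry into ballots, and it stays the same for every spatial model in this post; only the way we sample the points will change.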

These models are inherently abstract. The importance lies not in labeling the axes according to specific issues but in the existence of variation in multiple directions. While low-dimensional models communicate more effectively, higher dimensions capture a more nuanced picture of reality. Reality is almost always infinite-dimensional, but we can only perceive finite-dimensional projections.

To illustrate, consider a model with 100 dimensions, representing a variety of issues, with 10 candidates and 100,000 voters. I’ve simulated this, and the outcome differs significantly from earlier models.

  • Different voting systems demonstrate a high level of agreement; the same candidate often wins regardless of the voting system.
  • A Condorcet winner appears every time.

However, these findings are unrealistic. Plurality voting is known to be unreliable with many candidates, necessitating ranked ballots, primaries, and other mechanisms. To explain that, we’ll need a more realistic model.

Spatial model #2: Decaying variance and Zipf’s law

It turns out that the very aspect we introduced for greater realism — the 100-dimensional space — is the cause of the problem. By reducing the number of dimensions, we see results more in line with expectations.

However, using too few dimensions loses the expressive power of the spatial model, leaving out certain voter preferences. I covered this in an earlier post, Geometry, Dimensions, and Elections. With the following three candidates and a one-dimensional spatial model, the model fails to account for voters who prefer both candidates A and C over B.

Where are A>C>B and C>A>B?

By adding a second dimension, we permit the model to understand these preferences.

There they are!

We’re left with a conundrum: while too few dimensions can fail to explain certain voter preferences, too many dimensions can result in unrealistic outcomes due to a lack of constraints and loss of spatial coherence. The solution is to use a high-dimensional space, but constrain the variance of the additional dimensions. In reality, not all axes of political ideology carry equal weight in determining voter decisions.

The fixed variance model doesn’t account for the variable importance of dimensions, but uneven importance is not an unusual phenomenon. With data in a high-dimensional vector space, it’s common for a few particularly significant directions of variation to account for a substantial portion of the overall variance. This principle underlies key concepts in data science, such as Principal Component Analysis.

Here, it can be captured by applying Zipf’s law, a statistical rule of thumb that suggests a pattern of diminishing returns in measurements: the second greatest measurement might be about half of the largest, the third about a third, and so on in proportion to 1/n. Zipf’s law has been found to apply to the distribution of words in a language, city populations, even the income distribution within countries. By structuring our model in line with Zipf’s law, we can ensure that the most important political issues have the most impact on voter decisions.
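Concretely, Zipf scaling means the spread of dimension n is proportional to 1/n. A small sketch of sampling such a voter (Python, illustrative only — the exact scaling constants are an assumption, not taken from the post):

```python
import random

def zipf_scales(dims):
    # Dimension n contributes with weight 1/n: the first issue
    # matters most, the second half as much, and so on.
    return [1.0 / (n + 1) for n in range(dims)]

def zipf_uniform_voter(dims, rng):
    # Uniform in each dimension, but scaled so that the importance
    # of successive dimensions decays by Zipf's law.
    return [s * rng.uniform(-1.0, 1.0) for s in zipf_scales(dims)]

rng = random.Random(0)
voter = zipf_uniform_voter(100, rng)  # 100 issues, rapidly decaying weight
```

With this decay, the hundredth dimension can nudge a voter’s position by at most 1% of what the first dimension can, so adding many dimensions no longer drowns out the dominant issues.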

A multiple variance (still uniform) 3D spatial voter model

This model allows the incorporation of a high number of dimensions without losing spatial coherence. With this revised model, we see the differences between voting systems become evident. We’re still using a 100-dimensional model with 10 candidates and 100,000 voters, but this time applying Zipf’s law to the variances:

  • Condorcet and range voting results differ from plurality results 85% of the time.
  • Condorcet and range voting differ from instant runoff (IRV) about 60% of the time.
  • IRV differs from plurality about 50% of the time.

These results highlight the critical role of carefully adjusting our models to ensure they represent reality accurately… but our journey toward better models isn’t over yet.

Spatial model #3: The Mixture of Zipf Gaussians

This model counters the uniformity assumption. Real voters aren’t spread evenly across the political spectrum like buttered bread. Instead, they cluster according to location, personal values, cultural influences, and social groups.

These clusters can be modeled using a Gaussian, or normal distribution, where the mean represents the cluster’s center, and voters are centered around this point.

A Gaussian cluster of voters

However, a single Gaussian oversimplifies by implying most voters share similar or identical views. A more accurate representation involves multiple clusters, each with its own opinions. This yields a mixture of Gaussians (MoG) model, which represents voters forming communities of interest — like political parties, religious groups, or neighborhoods — and reaching consensus within these communities rather than beyond them.

Building realistic models required carefully tuning some details:

  • The number of clusters: Selecting 10 allows overlap and interaction between groups, but going beyond 10 introduces too much random noise, obscuring the distinct clusters.
  • The size, location, and spread of each cluster: Groups vary in homogeneity and range, and almost always have some overlap with neighboring clusters.
  • The orientation of each cluster: I slightly reshuffled the original order of dimensions before applying Zipf’s scaling law to each subpopulation, balancing the overall population dynamics with each group’s unique focus on certain issues.

The result can be termed a Mixture of Zipf Gaussian (MoZG) framework. Each specific model generated from this framework is defined by random placements and parameterizations of clusters. To look at some of these models, I’ve sampled voters (blue dots) and candidates (goldenrod boxes), projected them from the full 100 dimensions down to two, and plotted them. You can see a couple examples here.
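To make the framework concrete, here’s a deliberately simplified sketch of sampling from a mixture of Zipf-scaled Gaussians (Python, illustrative; the cluster centers and spreads below are made up, and the real model also reshuffles the dimension order per cluster, which this sketch omits):

```python
import random

def sample_mozg_voter(clusters, rng):
    """clusters: list of (center, spread) pairs, where each center is
    a point in opinion space. A voter is drawn from a randomly chosen
    cluster, with per-dimension variance decaying by Zipf's law."""
    center, spread = rng.choice(clusters)
    return [center[d] + rng.gauss(0.0, spread / (d + 1))
            for d in range(len(center))]

rng = random.Random(1)
# Two hypothetical clusters in a 3-dimensional opinion space.
clusters = [([0.0, 0.0, 0.0], 0.5), ([2.0, 1.0, 0.0], 0.3)]
voters = [sample_mozg_voter(clusters, rng) for _ in range(1000)]
```

Each cluster plays the role of a community of interest: its center is the community’s consensus position, and the Zipf-decaying Gaussian noise spreads its members around that consensus, mostly along the few dimensions that matter most.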

I’m quite fond of the generated models. They seem to capture the structure, texture, and unique character of organically developed voting populations. Voters and candidates have distinct and multi-dimensional positions, form overlapping factions with differing priorities, and even these factions have outliers. You could weave a convincing narrative about the motivations and history of each hypothetical population. These models are now ready to provide insights into the structure and dynamics of election systems within a complex and realistic setting.

Simulation and Analysis

Now that we’ve established a model for voters and preferences, let’s use it to scrutinize voting systems and glean insights into their real-world behavior. For this experiment, I’ve run numerous simulated elections using MoZG voter models described earlier, with the help of open source code I’ve made available at https://github.com/cdsmith/spatial-voting. We’ll be examining results from a thousand elections using 100 dimensions, 10 candidates, and 1,000 voters. These results are representative of a fairly wide range of parameters, though. Let’s dive in!

When and how do election results disagree?

For elections with more than two candidates, the best method for choosing a winner is still an open question. With these model voting populations at our disposal, we can take a bite at the question by asking: how often do these systems disagree, anyway? When they do, what is the nature of the disagreement?

To find out, I’ve identified winners for these thousand elections using the following methods:

  • Plurality. Each voter casts a vote for the one candidate they like best.
  • Instant runoff (IRV). Voters rank candidates by preference. The candidate with the fewest first place rankings is eliminated. Repeat until there’s a winner.
  • Condorcet. Choose the candidate who beats every other candidate in head-to-head elections.
  • Range voting. Each voter rates every candidate on a scale from 0 to 100, and the best average score wins.
  • Approval voting. Each voter selects as many candidates as they like, and the candidate approved by the largest share of voters wins. For the initial simulation, voters are assumed to approve of any candidates nearer than average to their positions.
  • STAR voting. Like range voting, but with a head-to-head runoff between the top two scoring candidates to determine the winner.
  • Borda Count. Each voter ranks candidates by preference, and the candidate with highest “Borda count” (which is equivalent to the lowest average rank) wins.
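To see how a few of these tally rules can disagree on the very same ranked ballots, here’s a sketch of plurality, IRV, and Borda count (Python, illustrative only; ties are broken arbitrarily, which a real implementation would need to handle with more care):

```python
from collections import Counter

def plurality_winner(ballots):
    # Each ballot is a list of candidates, best first; only the
    # top choice counts.
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    # Highest Borda count (equivalently, lowest average rank) wins.
    scores = Counter()
    n = len(ballots[0])
    for b in ballots:
        for rank, cand in enumerate(b):
            scores[cand] += n - 1 - rank
    return scores.most_common(1)[0][0]

def irv_winner(ballots):
    # Repeatedly eliminate the candidate with the fewest first-place
    # votes among those still remaining.
    remaining = set(ballots[0])
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining)
                         for b in ballots)
        loser = min(remaining, key=lambda c: firsts[c])
        remaining.discard(loser)
    return remaining.pop()

# 19 voters, 3 candidates: A leads on first choices, but C's voters
# prefer B, so IRV and Borda disagree with plurality.
ballots = (8 * [["A", "B", "C"]] + 6 * [["B", "C", "A"]]
           + 5 * [["C", "B", "A"]])
print(plurality_winner(ballots))  # → A
print(irv_winner(ballots))        # → B
print(borda_winner(ballots))      # → B
```

Even this tiny hand-built example shows the phenomenon the simulations measure at scale: when one candidate’s supporters split, plurality can crown a candidate whom a majority ranks last.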

The following table illustrates the frequency of agreement between these voting methods: