Planet Haskell

March 22, 2018

Functional Jobs

Software Engineer (Haskell - Full Stack - Singapore - On Site) at Capital Match (Full-time)

Position (Job Title): Software Engineer (Haskell - Full Stack - Singapore - On Site)

About us: Capital Match is an innovative fintech business providing a range of financial solutions for corporate and individual clients, investors and borrowers. We are the largest marketplace invoice financing platform in Southeast Asia, based in Singapore, having processed more than US$50 million in funding over the past 2 years. We are supported by multiple VCs and institutional investors including B Capital and Dymon Asia Capital, an alternative asset manager with $6bn AUM.

Job Description: We are looking for experienced developers to lead our tech growth in the fintech space, expand into surrounding countries, and develop new products and integrations. You will be involved in all aspects of the creation, growth and operations of a secure web-based platform. Experience in front-to-back feature development, distributed deployment and automation in the cloud, build and test automation, is highly desirable.

Job Requirements:

● 5 years (or more) of software engineering experience

● Software engineering experience with Haskell

● Strong foundation in data structures, algorithms, and software design

● Experience of developing both server and web applications


● Experience with modern software engineering practices such as TDD, CI, Emergent Design, Refactoring, Peer Review and Continuous Improvement

● Familiarity with Linux systems, command-line environments and cloud-based deployments

● A background in fintech and especially the lending space would be an advantage, but is not essential

Our tech stack: Our platform is primarily developed in Haskell with a ClojureScript frontend. We use Docker containers and standard cloud infrastructure systems to manage our production rollouts. We make heavy use of modern functional programming practices, such as property-based testing and type-driven programming, to improve the quality of our code. We use many popular Haskell libraries such as Lens, QuickCheck, Servant, Scotty, Aeson, Attoparsec, Criterion, Cryptonite, and Pandoc. We also use Template Haskell to minimize boilerplate and repetition within our code. We use continuous testing, integration and deployment wherever possible.


What we offer:

● Competitive salary

● Equity

● Visa sponsorship for full-time position in Singapore

Contact: Email us at HR [at] capital-match [dot] com with subject "Devs MARCH 2018"

Get information on how to apply for this position.

March 22, 2018 08:42 AM

March 21, 2018

Mark Jason Dominus

The 1943 Bengal famine

A couple of years ago I was reading Wikipedia's article about the 1943 Bengal famine, and I was startled by the following claim:

"If food is so scarce, why hasn’t Gandhi died yet?"

Winston Churchill's response to an urgent request to release food stocks for India.

It was cited, but also marked with the “not in citation” tag, which is supposed to mean that someone checked the reference and found that it did not actually support the claim.

It sounded like it might be the sort of scurrilous lie that is widely repeated but not actually supportable, so I went to follow it up. It turned out that although the quotation was not quite exact, it was not misleadingly altered, and not a scurrilous lie at all. The attributed source (Tharoor, Shashi. "The Ugly Briton". Time, 29 November 2010.) claimed:

Churchill's only response to a telegram from the government in Delhi about people perishing in the famine was to ask why Gandhi hadn't died yet.

I removed the “not in citation” tag, which I felt was very misleading.

Still, I felt that anything this shocking should be as well-supported as possible. It cited Tharoor, but Tharoor could have been mistaken. So I put in some effort and dug up the original source. It is from the journal entry of Archibald Wavell, then Viceroy of India, of 5 July 1944:

Winston sent me a peevish telegram to ask why Gandhi hadn't died yet! He has never answered my telegram about food.

This appears in the published version of Lord Wavell's journals. (Wavell, Archibald Percival. Wavell: The Viceroy's journal, p. 78. Moon, Penderel, ed. Oxford University Press, 1973.) This is the most reliable testimony one could hope for. The 1973 edition is available from the Internet Archive.

A few months later, the entire article was massively overhauled by a group of anglophiles and Churchill-rehabilitators. Having failed to remove the quotation for being uncited, and then having failed to mendaciously discredit the cited source, they removed the quotation in a typical episode of Wikipedia chicanery. In a 5,000-word article, one sentence quoting the views of the then-current British Prime Minister was deemed “undue weight”, and a failure to “fairly represent all significant viewpoints that have been published by reliable sources”.

Further reading: In Winston Churchill, Hollywood rewards a mass murderer. (Tharoor again, in last week's Washington Post.)

by Mark Dominus at March 21, 2018 01:20 PM

March 20, 2018

Mark Jason Dominus

English's -en suffix

In English we can sometimes turn an adjective into a verb by suffixing “-en”. For example:

black → blacken
red → redden
white → whiten
wide → widen

But not

blue → bluen*
green → greenen*
yellow → yellowen*
long → longen*

(Note that I am only looking at -en verbs that are adjective-derived present tenses. This post is not concerned with the many -en verbs that are past participles, such as “smitten” (past participle of “smite”), “spoken” (“speak”), “molten” (“melt”), “sodden” (“seethe”), etc.)

I asked some linguist about this once and they were sure it was purely phonological, something like: black, red, and white end in stop consonants, and blue, green, and yellow don't.

Well, let's see:

Stop        Blacken
Open        (?)
Fricative   Coarsen
Nasal       Cleanen
Vowel       Angrien
Glide       Betteren

There are some fine points:

  • “Biggen” used to exist but has fallen out of use
  • Perhaps I should have omitted “strengthen” and “hasten”, which are derived from nouns, not from adjectives
  • I'm not sure whether “closen”, “hotten” and “wetten” are good or bad so I left them off
  • “moisten” and “soften” might belong with the stops instead of the fricatives
  • etc.

but clearly the phonological explanation wins. I'm convinced.

[ Addendum: Wiktionary discusses this suffix, distinguishing it from the etymologically distinct participial “-en”, and says “it is not currently very productive in forming new words, being mostly restricted to monosyllabic bases which end in an obstruent”. ]

by Mark Dominus at March 20, 2018 12:26 AM

March 19, 2018

Mark Jason Dominus

Funky coordinate systems

I had a fun idea this morning. As a kid I was really interested in polar coordinates and kind of disappointed that there didn't seem to be any other coordinate systems to tinker with. But this morning I realized there were a lot.

Let $F(c)$ be some parametrized family of curves that partition the plane, or almost all of the plane, say except for a finite number of exceptions. If you have two such families $F_1(c)$ and $F_2(c)$, and if each curve in $F_1$ intersects each curve in $F_2$ in exactly one point (again with maybe a few exceptions) then you have a coordinate system: almost every point lies on $F_1(c_1)$ and $F_2(c_2)$ for some unique choice of $(c_1, c_2)$, and these are its coordinates in the system.

For example, when $F_1(c)$ is the family of lines $x=c$ and $F_2(c)$ is the family of lines $y=c$ then you get ordinary Cartesian coordinates, and when $F_1(c)$ is the family of circles $x^2+y^2=c$ and $F_2(c)$ is the family $y=cx$ (plus also $x=0$) you get standard polar coordinates, which don't quite work because the origin is in every member of $F_2$, but it's the only weird exception.

But there are many other families that work. To take a particularly simple example you can pick some constant $k$ and then take

$$\begin{align} F_1(c): && x & =c \\ F_2(c): && y & =kx+c. \end{align} $$

This is like Cartesian coordinates except the axes are skewed. I did know about this when I was a kid but I considered it not sufficiently interesting.
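
Concretely, the point with Cartesian coordinates $(x, y)$ has skew coordinates

$$\begin{align} c_1 & = x \\ c_2 & = y - kx \end{align} $$

since the point lies on $F_1(c_1)$ exactly when $x = c_1$, and on $F_2(c_2)$ exactly when $y = kx + c_2$.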

For a more interesting example, try

$$\begin{align} F_1(c): && x^2-y^2 & =c \\ F_2(c): && xy & =c \end{align} $$

which looks like this:

[Figure: two families of hyperbolas, as in the displayed equation immediately preceding. The hyperbolas $x^2-y^2=c$ (in blue) and $xy=c$ (in red).]

I've seen that illustration before but I don't think I thought of using it as a coordinate system. Well, okay, every pair of hyperbolas intersects in two points, not one. So it's a parametrization of the boundary of real projective space or something, fine. Still fun!

In the very nice cases (such as the hyperbolas) each pair of curves is orthogonal at their point of intersection, but that's not a requirement, as with the skew Cartesian system. I'm pretty sure that if you have one family $F_1$ you can construct a dual family $F_2$ that is orthogonal to it everywhere by letting $F_2$ be the paths of gradient descent or something. I'm not sure what the orthogonality is going to be important for but I bet it's sometimes useful.

You can also mix and match families, so for example take:

$$\begin{align} F_1(c): && x & =c \\ F_2(c): && xy & =c \end{align} $$

Some examples work better than others. The hyperbolas $xy=c$ are kind of a mess when $c=0$, and they don't go together with the circles in the right way at all: each circle intersects each hyperbola in four points. But it occurs to me that as with the projective plane thingy, we don't have to let that be a problem. Take $Q$ to be the quotient space of the plane where two points are identified if their coordinates in the new system are the same and then investigate $Q$. Or maybe go more directly and take $S = F_1\times F_2$ (literally the Cartesian product), and then topologize $S$ in some reasonably natural way. Maybe just give it the product topology. I dunno, I have to think about it.

(I was a bit worried about how to draw the hyperbola picture, but I tried Google Image search for “families of orthogonal hyperbolas”, and got just what I needed. Truly, we live in an age of marvels!)

by Mark Dominus at March 19, 2018 04:56 PM

Functional Jobs

Scala developer for NetLogo at Northwestern University (Full-time)

Job Summary

The CCL Lab at Northwestern University is looking for a full-time Scala/Java Software Developer to work on the NetLogo desktop application, a celebrated modeling environment used in both education and research. The developer we seek will also participate in the design of web-based modeling applications in JavaScript.

This Software Developer position is based at Northwestern University's Center for Connected Learning and Computer-Based Modeling (CCL), working in a small collaborative development team in a university research group that also includes professors, postdocs, graduate students, and undergraduates, supporting the needs of multiple research projects. A major focus would be on development of NetLogo, an open-source modeling environment for both education and scientific research. CCL grants also involve development work on HubNet and other associated tools for NetLogo, including research and educational NSF grants involving building NetLogo-based science curricula for schools.

NetLogo is a programming language and agent-based modeling environment. The NetLogo language is a dialect of Logo/Lisp specialized for building agent-based simulations of natural and social phenomena. NetLogo has many tens of thousands of users ranging from grade school students to advanced researchers. A collaborative extension of NetLogo, called HubNet, enables groups of participants to run participatory simulation activities in classrooms and distributed participatory simulations in social science research. NetLogo can also be extended to interface with external software such as R, Mathematica and Python.

Application information:

The Northwestern campus is in Evanston, Illinois on the Lake Michigan shore, adjacent to Chicago and easily reachable by public transportation.

To apply, please contact Uri Wilensky, Director of the CCL, at the following email address: ccl-developer-job [at] ccl [dot] northwestern [dot] edu

Please put 'Developer Job' in the subject line. Please include a PDF version of your resume and, optionally, links to any relevant development projects you feel illustrate your abilities.

Specific Responsibilities:

  • Collaborates with the NetLogo development team in designing features for NetLogo, HubNet, NetLogo extensions, and web-based versions of these applications; writes code independently, and in the context of a team of experienced software engineers and principal investigator;
  • Creates, updates and documents existing models using NetLogo, HubNet, NetLogo extensions, and web-based applications; creates new such models;
  • Interacts with commercial and academic partners to help determine design and functional requirements for NetLogo, HubNet, and NetLogo extension features or projects;
  • Interacts with user community including responding to bug reports, questions, and suggestions, and interacting with open-source contributors;
  • Mentors undergraduate student workers and guides Google Summer of Code participants on contributions to the NetLogo codebase; assists graduate students with issues encountered during their work;
  • Performs data collection, organization, and summarization for projects; assists with coordination of team activities;
  • Performs other duties as required or assigned.

Minimum Qualifications:

  • A bachelor's degree in computer science or a closely related field or the equivalent combination of education, training and experience from which comparable skills and abilities may be acquired;
  • Demonstrated experience and enthusiasm for writing clean, modular, well-tested code.

Preferred Qualifications:

  • Experience with working effectively as part of a small software development team, including close collaboration, distributed version control, and automated testing;
  • Experience with at least one JVM language, Scala strongly preferred;
  • Experience developing GUI applications, especially Java Swing-based applications;
  • Experience with programming language design and implementation, functional programming (especially Haskell or Lisp), and compilers;
  • Interest in and experience with computer-based modeling and simulation, especially agent-based simulation;
  • Interest in and experience with distributed, multiplayer, networked systems like HubNet;
  • Experience working on research projects in an academic environment;
  • Experience with open-source software development and supporting the growth of an open-source community;
  • Experience with Linux/Unix system administration;
  • Experience with building web-based applications, both server-side and client-side components, particularly with HTML5 and JavaScript and/or CoffeeScript;
  • Interest in education and an understanding of secondary school math and science content.

As per Northwestern University policy, this position requires a criminal background check. Successful applicants will need to submit to a criminal background check prior to employment.

Northwestern University is an Equal Opportunity, Affirmative Action Employer of all protected classes including veterans and individuals with disabilities.

Get information on how to apply for this position.

March 19, 2018 04:37 PM

Ken T Takusagawa

[mjdtvttx] Flipping function composition

Here is how to refer to the function composition operator in Haskell when it has a qualified prefix: "Prelude..".  It certainly looks very weird.  Here it is in the context of creating an operator which puts its arguments in an order which some might consider more natural.

import Prelude hiding ((.));
import qualified Prelude;
(.) :: (a -> b) -> (b -> c) -> a -> c;
(.) = flip (Prelude..);

But consider just using the builtin operator Control.Category.>>> instead of redefining "." in this way.
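
For a quick comparison, here is that suggestion spelled out as a small self-contained sketch (lengthThenShow is just an illustrative name):

import Control.Category ((>>>))

-- (>>>) already composes left to right, with no redefinition of (.):
-- length runs first, then show.
lengthThenShow :: [a] -> String
lengthThenShow = length >>> show

main :: IO ()
main = putStrLn (lengthThenShow "hello")  -- prints "5"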

by Ken at March 19, 2018 03:40 PM

March 17, 2018


Fake: Generating Realistic Test Data in Haskell

On a number of occasions over the years I've found myself wanting to generate realistic looking values for Haskell data structures.  Perhaps I'm writing a UI and want to fill it in with example data during development so I can see how the UI behaves with large lists.  In this situation you don't want to generate a bunch of completely random unicode characters.  You want things that look plausible so you can see how it will likely look to the user with realistic word wrapping, etc.  Later, when you build the backend you actually want to populate the database with this data.  Passing around DB dumps to other members of the team so they can test is a pain, so you want this stuff to be auto-generated.  This saves time for your QA people because if you didn't have it, they'd have to manually create it.  Even later you get to performance testing and you find yourself wanting to generate several orders of magnitude more data so you can load test the database, but you still want to use the same distribution so it continues to look reasonable in the UI and you can test UI performance at even bigger scale.

Almost every time I've been in this situation I thought about using QuickCheck's Arbitrary type class.  But that never seemed quite right to me for a couple reasons.  First, Arbitrary requires that you specify functions for shrinking a value to simpler values.  This was never something I needed for these purposes, so it seemed overkill to have to specify that infrastructure.  EDIT: I was mistaken with this.  QuickCheck gives a default implementation for shrink.  Second, using Arbitrary meant that I had to depend on QuickCheck.  This always seemed too heavy to me because I didn't need any of QuickCheck's property testing infrastructure.  I just wanted to generate a few values and be done.  For a long time these issues were never enough to overcome the activation energy needed to justify releasing a new package.

More recently I realized that the biggest reason QuickCheck wasn't appropriate is because I wanted a different probability distribution than the one that QuickCheck uses.  This isn't about subtle differences between, say, a normal versus an exponential distribution.  It's about the bigger picture of what the probability distributions are accomplishing.  QuickCheck is significantly about fuzz testing and finding corner cases where your code doesn't behave quite as expected.  You want it to generate strings with things like different kinds of quotes to verify that your code escapes things properly, weird unicode characters to check encoding issues, etc.  What I wanted was something that could generate random data that looked realistic for whatever kind of realism my domain needed.  These two things are complementary.  You don't just want one or the other.  Sometimes you need both of them at the same time.  Since you can only have one instance of the Arbitrary type class for each data type, riding on top of QuickCheck wouldn't be enough.  This needed a separate library.  Enter the fake package.

The fake package provides a type class called Fake which is a stripped down version of QuickCheck's Arbitrary type class intended for generating realistic data.  With this we also include a random value generator called FGen which eliminates confusion with QuickCheck's Gen and helps to minimize dependencies. The package does not provide predefined Fake instances for Prelude data types because it's up to your application to define what values are realistic.  For example, an Int representing age probably only needs to generate values in the interval (0,120].
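
A minimal sketch of such a domain-specific instance might look like this (assuming the class's single method is called fake and that the package provides a ranged integer generator named fakeInt; check the package documentation for the exact names):

import Fake

newtype Age = Age Int deriving Show

-- Only plausible human ages; fakeInt here stands in for whatever
-- ranged generator the package actually exports.
instance Fake Age where
  fake = Age <$> fakeInt 1 120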

It also gives you a number of "providers" that generate various real-world things in a realistic way.  Need to generate plausible user agent strings?  We've got you covered.  Want to generate US addresses with cities and zip codes that are actually valid for the chosen state?  Just import the Fake.Provider.Address.EN_US module.  But that's not all; see the full list of providers here.

I tried to focus on providers that I thought would be broadly useful to a wide audience.  If you are interested in a provider for something that isn't there yet, I invite more contributions!  Similar packages exist in a number of other languages, some of which are credited in fake's README.  If you are planning on writing a new provider for something with complex structure, you might want to look at some of those to see if something already exists that can serve as inspiration.

One area of future exploration where I would love to see activity is something building on top of fake that allows you to generate entire fake databases matching a certain schema and ensuring that foreign keys are handled properly.  This problem might be able to make use of fake's full constructor coverage concept (described in more detail here) to help ensure that all the important combinations of various foreign keys are generated.

by mightybyte at March 17, 2018 03:38 PM

Philip Wadler

One simple step to increase diversity

One simple step to increase diversity is to ask. From now on, I plan to send all relevant job announcements to Lambda Ladies, specifically by email to the moderators. Above is the Lambda Ladies party at Strange Loop, September 2014. Thank you for existing, ladies!

by Philip Wadler at March 17, 2018 01:27 PM

March 16, 2018


Efficiently Improving Test Coverage with Algebraic Data Types

Think of a time you've written tests for (de)serialization code of some kind, say for a data structure called Foo.  If you were using the lowest level of sophistication you probably defined a few values by hand, serialized them, deserialized that, and verified that you ended up with the same value you started with.  In Haskell nomenclature we'd say that you manually verified that parse . render == id.  If you were a little more sophisticated, you might have used the QuickCheck library (or any of the numerous similar packages it inspired in other languages) to verify the parse . render == id property for a bunch of randomly generated values.  The first level of sophistication is often referred to as unit testing.  The second frequently goes by the term property testing or sometimes fuzz testing.

Both unit testing and property testing have some drawbacks.  With unit testing you have to write fairly tedious boilerplate of listing by hand all the values you want to test with.  With property testing you have to write a precise specification of how to generate the different cases randomly.  There is a package called generic-arbitrary that can automatically derive these generators for you generically, but that approach often ends up testing many values that aren't giving you any increase in code coverage.  To see this, consider the example of testing this domain-specific data type:

data MyError e a = Failure e | Success a

Here is what we get using generic-arbitrary to generate test values:

λ import Test.QuickCheck
λ import Test.QuickCheck.Arbitrary.Generic
λ let g = genericArbitrary :: Gen (MyError Int Bool)
λ mapM_ print =<< generate (replicateM 20 g)
Success True
Success True
Success False
Success False
Success True
Failure 17
Failure 8
Failure (-29)
Failure (-14)
Success True
Failure 22
Failure (-2)
Success False
Success True
Success False
Failure (-15)
Success False
Success True
Failure (-5)
Failure (-5)

It is testing the exact same value multiple times.  But even if it had a mechanism for deduplicating the values, do you really need so many different test cases for the serialization of the Int in the Failure case?  I'm testing my serialization code for the MyError type, so I probably don't care about testing the serialization of Int.  I would take it as a given that the libraries I'm using got the serialization for Int correct.  If we take this assumption as a given, then for the above case of testing MyError Int Bool, I would really only care about two cases, the Failure case and the Success case.  Testing serializations for Int and Bool are out of the scope of my concern because from the perspective of my project they are primitives.

What we really want to test is something I call "full constructor coverage".  With this pattern of generation you only generate enough values to exercise each constructor in your type hierarchy once.  This gets you better code coverage while testing many fewer values.  If we wanted exhaustive testing, the number of values you'd need to test is given by this relation:

-- Product type
numCases (a, b) = numCases a * numCases b
-- Sum type
numCases (Either a b) = numCases a + numCases b

This is precisely what the "algebraic" in "algebraic data types" is referring to.  For product types the number of inhabitants is multiplied, for sum types it is added.  However, if we're going for full constructor coverage, the number of values we need to test is reduced to the following relation:

numCases (a, b) = max (numCases a) (numCases b)
numCases (Either a b) = numCases a + numCases b
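
As a small worked example, treat Int and Bool as primitives that each count as a single case; then for a pair of MyError values:

-- Exhaustive:
--   numCases (MyError Int Bool, MyError Int Bool)
--     = (1 + 1) * (1 + 1) = 4
-- Full constructor coverage:
--   numCases (MyError Int Bool, MyError Int Bool)
--     = max (1 + 1) (1 + 1) = 2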

For complex types, replacing the multiplication with the max greatly reduces the number of values you need to test.  Here's what full constructor coverage looks like for MyError:

λ import Fake
λ let g = unCoverage (gcover :: Coverage (MyError Int Bool))
λ mapM_ print =<< generate (sequence g)
Failure 75
Success False

At this point readers might be thinking, "Who cares?  Computers are fast."  Well, QuickCheck defaults to generating 100 test cases for each property, but complex types can easily make this insufficient.  You can increase how many test cases it generates, but that will make your test suite slower and you probably won't know what the right number is to get the level of coverage you are aiming for.  In my experience slower test suites can cause tangible business impact in production systems in terms of possibly longer downtimes when problems are encountered in production and you have to wait for tests to pass before you can deploy a fix.  At the end of the day, naive randomized test case generation is very unlikely to score as well as the full constructor coverage method under the metric of lines of code covered per test case.

The good news is that Haskell's algebraic data types give us exactly what we need to get full constructor coverage in a completely automatic way.  It is a part of the fake package and you can use it today.  You can see example uses in the test suite.  In a dynamically typed language you don't have knowledge of the constructors and forms they take.  All variables can hold any data, so you don't have any knowledge of structure to guide you.  Thanks to types, Haskell knows that structure.

by mightybyte at March 16, 2018 06:24 PM

March 15, 2018

FP Complete

Without performance tests, we will have a bad time, forever

Today I was working on a client project when a seemingly innocent refactoring made the program 2x faster.

by Niklas Hambüchen at March 15, 2018 06:36 PM


Armor Your Data Structures Against Backwards-Incompatible Serializations

As almost everyone with significant experience managing production software systems should know, backwards compatibility is incredibly important for any data that is persisted by an application. If you make a change to a data structure that is not backwards compatible with the existing serialized formats, your app will break as soon as it encounters the existing format. Even if you have 100% test coverage, your tests still might not catch this problem. It’s not a problem with your app at any single point in time, but a problem with how your app evolves over time.

One might think that wire formats which are only used for communication between components and not persisted in any way would not be susceptible to this problem. But these too can cause issues if a message is generated and a new version of the app is deployed before the message is consumed. The longer the message remains in a queue, redis cache, etc., the higher the chances of this occurring.

More subtly, if you deploy a backwards incompatible migration, your app may persist some data in the new format before it crashes when it receives the old format. This can leave your system in the horrible state where not only will it not work with the new code, but rolling back to the old code won’t work either because the old code doesn’t support the new serialized format! You have two incompatible serializations active at the same time! Proper migration systems can reduce the chances of this problem occurring, but if your system has any kind of queueing system or message bus, your migrations might not be applied to in-flight messages. Clearly we need something to help us protect against this problem. Enter the armor package.

Armor is a Haskell package that saves serialized versions of your data structures to the filesystem and tests that they can be correctly parsed. This alone at a single point in time verifies that parse . render == id which is a property that you usually want your serializations to have. But in addition to that, armor tracks a version number for your data structures and uses that to accumulate historical serialization formats over time. It stores the serialized bytes in `.test` files that you check into source control. This protects against backwards-incompatible changes and at the same time avoids cluttering up your source code with old versions of the data structure.

Armor does this while being completely agnostic to the choice of serialization library. In fact, it can support any number of different serialization formats simultaneously. For more information check out the literate Haskell tutorial in the test suite.
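
The underlying idea is easy to sketch by hand (this is only the concept, not armor's actual API, which the tutorial above documents): keep a golden file per serialized format, write it on the first run, and afterwards check that the stored historical bytes still parse.

import qualified Data.ByteString.Lazy as LBS
import System.Directory (doesFileExist)

-- A hand-rolled sketch of the golden-file idea; render/parse stand for
-- whatever serialization pair you use (Aeson, binary, ...).
goldenParses
  :: Eq a
  => (a -> LBS.ByteString)        -- render
  -> (LBS.ByteString -> Maybe a)  -- parse
  -> FilePath                     -- golden file checked into source control
  -> a                            -- a representative value of the type
  -> IO Bool
goldenParses render parse path val = do
  exists <- doesFileExist path
  if exists
    then do
      old <- LBS.readFile path          -- the historical serialized format
      pure (parse old == Just val)      -- old bytes must still parse today
    else do
      LBS.writeFile path (render val)   -- first run: record current format
      pure True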


Inspiration for this package came from Soostone's safecopy-hunit package.

Details were refined in production at Formation (previously Takt).

by mightybyte at March 15, 2018 12:38 PM

March 14, 2018

Christopher Allen

How I stream myself coding

Alright so I’ve been uploading streamed videos of myself working on programming projects for awhile now. I occasionally get asked about my setup for this, so I thought I would explain the tools I use. This might be particularly valuable as I am primarily a Linux user where some of the kit for this can be kinda rough. Further, I am very picky about my tools and ergonomics so I didn’t really want to make any sacrifices there in order to stream.

The tools

  • OBS Studio (open broadcaster): I always live stream to and save to a file simultaneously.

  • Hangouts: I pull people into Hangouts that want to ask questions and to be able to see my stream without a time delay. They are not seen nor heard in the actual stream. I repeat their questions for the rest of the stream when I’m going to answer it.

  • VLC: I use this to slice out my second monitor only for the Hangout stream. Hangouts doesn’t know how to “see” a single monitor, but it can see individual applications. With that, I can just pick VLC in order to share a single monitor. Here’s my VLC script for grabbing my middle monitor.

  • Slack: Announce the stream, occasionally get comments from people.

  • Twitch: live streams go here. Their live streaming and tools around it were better than YouTube's.

  • YouTube: my video uploads go here. I process the recorded file before uploading it.

  • Three 23" 1080p monitors: This is actually necessary for how I do this. Here’s why:

  • Left monitor (not streamed): Monitoring stream and chat in case any viewers ask questions. OBS Studio. Slack. Odds and ends.

  • Middle monitor (this is what gets streamed): usually my browser workspace (the first workspace in my XMonad configuration) or my Emacs + terminal workspace (the second workspace).

  • Right monitor (not streamed): This is devoted to VLC because I need a whole monitor to recap the middle monitor for Hangouts. This can be ditched if I’m not doing Hangouts as OBS knows how to grab a single monitor. However, I often find a friend or colleague will jump into Hangouts mid-stream so I habitually maintain the same setup so there’s minimal delay mid-stream getting them in.

  • XMonad: absolutely necessary for me to work efficiently. Lets me instantly switch workspaces on my middle monitor without perturbing my VLC workspace or my ‘dashboard’ (OBS/Hangouts/Twitch/etc.) workspace.

  • Heil PR-40 microphone and a Xenyx Q802. Not strictly necessary but stay the hell away from Blue microphones, they’re all irredeemable trash. You’re better off using a built-in mic or a webcam mic. A good middle option is a solid dynamic mic like a Shure SM58 with a USB mixer like my Xenyx. I have not actually tested a Shure SM57 or SM58 with my recording setup but a dynamic mic in general should be much better for your typical home streamer. Condenser mics seem to pick up a lot of extraneous noise that is hard to filter out or control for in a home environment. The Blue Yeti would pick up even relatively subtle background noises like the hum of my air conditioning. I had similar issues with other cheap or “cheap” condenser mics until I did some research and found out about dynamic microphones.

  • Turn on mic monitoring. Hear yourself speak. It’ll help train you out of bad vocal tics and mouth sounds. If you’ve ever pair-programmed with somebody that makes obnoxious mouth sounds then you know how important this is for your streams.

  • Play any background music into your headphones quietly and DO NOT rebroadcast it onto the stream. Almost all of my headphones are open-air cans. This means they aggressively leak sound. I’ve had YouTube yank my videos for copyright claims because a 15 second snippet of a song leaked into the mic recording. Listening to music from speakers is obviously a hard-no. Don’t annoy your viewers.

  • If possible, use a carpeted room. My home office (I exclusively work from home) is carpeted.

After the stream is done, I process the file saved by OBS with some denoising scripts to smooth out the audio. Here’s a dump of a couple of scripts I’ve kicked around:


# Assumes VIDEO_FILE is set to the recording saved by OBS.
rm -f audio.wav
# Extract the audio track to uncompressed WAV.
avconv -i "$VIDEO_FILE" -c:a pcm_s16le -vn audio.wav
# normalize-audio -a -9dbFS audio.wav
# Normalize the loudness to -12 dBFS.
normalize-audio -a -12dbFS audio.wav
# Remux the normalized audio back in alongside the original video stream,
# re-encoding the audio to AAC.
avconv -i "$VIDEO_FILE" -i audio.wav -map 0:0 -map 1:0 -c:v copy -c:a libvo_aacenc \

# VIDEO_FILE_AUDIO_CLEAN=${VIDEO_FILE%.*}-audio-clean.wav

# Assorted denoising experiments: band-pass filtering, splitting the
# audio and video streams, and sampling a noise profile for sox.
# ffmpeg -i $VIDEO_FILE -af "highpass=f=200, lowpass=f=3000" $VIDEO_FILE_DENOISED
rm -f noise_audio.wav

# ffmpeg -i $VIDEO_FILE -qscale:v $QSCALE -an $VIDEO_FILE_VIDEO
# ffmpeg -i $VIDEO_FILE -qscale:v $QSCALE $VIDEO_FILE_AUDIO
# ffmpeg -i $VIDEO_FILE -vn -ss 0:00:00 -t 0:00:02 noise_audio.wav
# ffmpeg -i $VIDEO_FILE -vn -ss 0:33:30 -t 0:33:33 noise_audio.wav
# sox noise_audio.wav -n noiseprof

If you’ve got better kit/scripts than I’ve mentioned here, please speak up! I’m always up for improving this stuff. Please contact me if you need clarification on something I’ve mentioned here or if you have questions.

Comment on microphones from my friend Tyler on Twitter

I’ve lightly edited his words here:

I take issue with your statement about blue microphones. I had a Shure SM58 and a blue spark and I liked the blue much better than the SM58. I prefer condenser microphones over dynamic microphones, I just run the gain as low as I can to make it not pick up the background. USB mics are trash.

I never messed with USB mics. I’ve exclusively had XLR microphones since I got started back in 2011. My little brother has a Blue Icicle and that is not the best experience. Constantly picking up his keyboard.

If you can pick up a Shure SM58 for 50 bucks like I did, the thing is nigh indestructible, great deal. The only reason mine doesn’t work is because I broke a solder point and I didn’t want to repair it.

It looks like they refreshed the Blue Spark with a high pass filter and a -20db pad.

My reply:

FWIW I tried setting the gain low on my Blue Yeti, it wasn’t enough. I had to scream at the mic when the gain was low enough not to pick up noise.

At time of writing, it doesn’t seem like the Blue Spark is available any more. (See above about the model refresh)

It’s possible an XLR condenser mic with a mixer could be okay for your purposes, but most of the Blue mics are USB now which severely limits their usefulness. Not my bag of chips but I wanted to include a different viewpoint.

I know this site is a bit of a disaster zone, but if you like my writing or think you could learn something useful from me, please take a look at the Haskell book I've been writing. There's a free sample available too!

March 14, 2018 12:00 AM

March 13, 2018

Edward Z. Yang

Online/offline continuous integration

Raise your hand if you've ever put one of these commands in your continuous integration scripts:

  • apt install somepackage
  • pip install -r requirements.txt or pip install somepkg
  • conda install blah
  • cabal update or cabal install blah
  • git clone
  • wget http://some-website/thingy-latest.tgz

Can you tell what the problem is? These commands are not reproducible: depending on when you run them, they may give different results. More insidiously, most of the time they give you the same result (or, perhaps, a different result that still works for your use case).

I know, we need a reproducible build! The prevailing answer to this problem by tooling authors has been to seize the means of production and replace it with something that is reproducible. If you live in the npm/yarn ecosystem, lockfiles ensure all of your dependencies redownload the same way every time (except when it doesn't). If you live in the Stack ecosystem, Stackage distributions ensure that you get the same Hackage package every time you build (except when it doesn't...). If you live in the Nix ecosystem, it means literally replacing the packaging system for everything on your system to achieve reproducibility.

So, it seems:

  1. If you can live entirely within the walled garden of the tools you use, things are pretty reproducible, but you're still on your own when it comes to taking updates on your dependencies.
  2. As soon as you step outside of the garden, it's entirely up to you to ensure reproducibility. The "easy way" out is usually not reproducible.

What if we change the question? We have entered this discussion under the assumption that reproducibility is our terminal value. But it's not: it's the mechanism by which we can achieve other goals. In the setting of continuous integration, what we really care about is a system that gives us signal about whether or not a given change set is correct or breaks things. A non-reproducible build interferes with this goal only in the sense that it's harder to tell if a change set has broken things if some random dependency has self-updated and broken your build. If this happens, you are blocked: you won't get clean signal until you fix the dependency problem. Broken window theory demands you drop everything and fix the build.

Clearly, we don't care if our dependencies are getting silently upgraded as development proceeds; in fact, we might prefer it, because "automatic" is less friction than "manual", at least when it works. What we do care about is the ability to block the upgrade if it is known to break us or revert the upgrade if we find out later that it caused some breakage.

Online/offline continuous integration. We traditionally think of the continuous integration build as a single pipeline which, when run from beginning to end, gives us signal about whether or not our code works or not. But I think it is better to think of a CI pipeline as dividing into two phases:

  1. Online environment configuration. In this stage, you download all of the external software that depends on that fiddly third-party world, setting up a complete build environment. Once you are done, you snapshot this environment by some mechanism (e.g., a filesystem snapshot or a Docker image).
  2. Offline actual build and test. In this stage, within the snapshotted environment from step (1), turn off your Internet connection and run the actual build and test.

The key is that you don't have to run step (1) every build (you didn't want to anyway, for performance reasons.) Instead, the series of immutable snapshots of build environments generated by step (1) gives you the ability to revert or peg to a particular version of all of your dependencies, without having to go and make the universe reproducible. You can have a weekly cronjob rebuilding your environment, running the tests, and only deciding to push the active snapshot forward if everything passes. You don't have to actually turn off the Internet when you run step (2), but it might help keep you honest.

Think offline. In today's connected world, it's easy to build systems with the assumption that you are always connected to the Internet. Doing so, however, leaves your tool at the mercy of the sound and fury of the real world. By applying a simple principle: "what can I do offline; what must I do online?" we reverse-engineer a design for continuous integration that gives you something almost as good as reproducibility, without forcing you to rewrite the universe. Surely that's worth something.

by Edward Z. Yang at March 13, 2018 02:52 AM

March 12, 2018

Mark Jason Dominus

Quick and dirty prime counting

I've been thinking for a while that I probably ought to get around to memorizing all the prime numbers under 1,000, so that I don't have to wonder about things like 893 all the time, and last night in the car I started thinking about it again, and wondered how hard it would be. There are 25 primes under 100, so presumably fewer than 250 under 1,000, which is not excessive. But I wondered if I could get a better estimate.

The prime number theorem tells us that the number of primes less than $n$ is about $\frac{n}{\log n}$ and I think the logarithm is a natural one, but maybe there is some constant factor in there or something, I forget and I did not want to think about it too hard because I was driving. Anyway I cannot do natural logarithms in my head.

But we don't need to do any actual logarithms. Let's estimate the fraction of primes up to $n$ as $\frac1{c\log n}$ where $c$ is unknown and the base of the logarithm is then unimportant. The denominator scales linearly with the power of 10, so the difference between the denominators for $n=100$ and $n=1000$ is the same as the difference between the denominators for $n=10$ and $n=100$.

There are 4 primes less than 10, or $\frac25$ of them, so the denominator is 2.5. And there are 25 primes less than 100, or $\frac14$, so the denominator here is 4. The difference is 1.5, so the denominator for $n=1000$ ought to be around 5.5, and that means that about $\frac1{5.5}$ of the numbers up to 1000 are prime. This yields an estimate of $\frac{1000}{5.5}\approx 182$.

I found out later that the correct number is 186, so I felt pretty good about that.

[ Addendum: The correct number is 168, not 186, so I wasn't as close as I thought. ]

by Mark Dominus at March 12, 2018 12:03 AM

March 09, 2018

Neil Mitchell

Safe Library with better Stack Traces

Summary: The safe library now provides error messages with short and informative stack traces on errors.

The Haskell base library contains a number of functions that crash under certain circumstances, e.g. tail []. The safe library attempts to tame these functions, providing tailDef (which uses a default value on []) and tailNote (which gives you a chance to provide some extra information if there is a failure). Since GHC 7.10 there has been support for adding stack traces to exceptions, where if any function with an appropriate annotation calls error it will include some stack trace.

Just over a year ago the safe library introduced a Partial constraint in Safe.Partial to declare that a function is partial, which also provides the annotations for error. The signature of tailNote became:

tailNote :: Partial => String -> [a] -> [a]

This signature has two benefits - it is visible in the documentation and it provides the stack trace if something goes wrong. If you typed tailNote "uhoh" [] in ghci you got:

Prelude Safe> tailNote "uhoh" []
*** Exception: Safe.tailNote [], uhoh
CallStack (from HasCallStack):
error, called at .\Safe\Util.hs:23:44 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe.Util
fromNoteModule, called at Safe.hs:65:12 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe
fromNote, called at Safe.hs:108:17 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe
tailNote, called at <interactive>:5:1 in interactive:Ghci1

Great - we can see the final line says we were on line 5 of the interactive and ran tailNote. Useful, but with the new version of safe it's even better:

*Main> tailNote "uhoh" []
*** Exception: Safe.tailNote [], uhoh
CallStack (from HasCallStack):
tailNote, called at <interactive>:1:1 in interactive:Ghci1

We still get the interesting final line, but all the internal details of safe, e.g. the fact that tailNote calls fromNote, have disappeared.

To get the stack traces just add Partial to any function you believe to be partial - it's easy. If you are happy to stick with GHC 8.0 and above you can use HasCallStack from GHC.Stack without depending on safe. I am slowly adding annotations to my packages, for example the extra library has Partial annotations.
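
As a sketch, annotating a partial function of your own is a one-constraint change (headOrDie is an invented example; Partial comes from Safe.Partial, as described above):

import Safe.Partial (Partial)

-- The Partial constraint makes callers show up in the CallStack on error.
headOrDie :: Partial => [a] -> a
headOrDie (x:_) = x
headOrDie []    = error "headOrDie: empty list"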

Suppressing the internal functions was a lot more work. I think it's worth it for a package that is all about nice handling of errors, but I probably won't bother in any of my other packages. The change to the code was going from:

tailNote note = fromNote note "tailNote []" . tailMay

to

tailNote note x = withFrozenCallStack $ fromNote note "tailNote []" $ tailMay x

The main change is we've added withFrozenCallStack, which freezes the call stack at this point and stops new entries from being accumulated inside the bowels of our library. The other change was to eta-expand the definition by adding x on both sides, so that withFrozenCallStack gets to block the actual error, and not merely a function that later produces an error.

Partial constraints are very powerful, and I hope in time they are adopted universally throughout the Haskell ecosystem. One day I hope the Prelude.tail function will also have a Partial constraint.

by Neil Mitchell at March 09, 2018 07:25 PM

Philip Wadler

IOHK is hiring six PLT engineers

IOHK is hiring six Programming Language Theory engineers, to design and implement the smart contract language Plutus and related domain specific languages. Designing scripting languages for smart contracts is a challenging topic, as it is crucial to avoid the sort of exploits that regularly drain Ethereum of tens of millions of dollars worth of cryptocurrency. I am one of the lead designers; two others are Duncan Coutts and Manuel Chakravarty, who are well known to many in this community.

IOHK is one of the leading cryptocurrency firms. Much of its software is implemented in Haskell. All work is open source and publication is encouraged. Indeed, IOHK is unique in that it is committed to basing its development on peer-reviewed research, in cryptography and security as well as in programming languages and formal methods. As Charles Hoskinson, IOHK's CEO, points out, if IOHK succeeds it may impact how software is developed, encouraging others to more seriously consider functional programming, formal methods, and peer-review. IOHK is a distributed company: I am in Edinburgh and Rio de Janeiro; Duncan is in London; Manuel is in Sydney; you may work from wherever you like.

Further details here:

by Philip Wadler at March 09, 2018 03:39 PM

March 08, 2018

The GHC Team

GHC 8.4.1 released

The GHC developers are very happy to announce the 8.4.1 release of Glasgow Haskell Compiler. Binary and source distributions can be found at

This is the third major release in the GHC 8 series. As such, the focus of this release is performance, stability, and consolidation. Consequently numerous cleanups can be seen throughout the compiler including,

  • Further refinement of TypeInType, including significant improvements in error messages.
  • Improvements in code generation resulting in noticeable performance improvements in some types of programs.
  • Core library improvements, including phase 2 of the Semigroup/Monoid proposal
  • Many improvements to instance deriving
  • The resolution of nearly 300 other tickets

A more thorough list of the changes in this release can be found in the release notes,

There are a few changes in release-engineering matters that should be noted,

  • This is GHC's first release on its new, accelerated release schedule. From now on GHC will produce one release every six months.
  • While we typically strive to produce OpenBSD builds, the gcc shipped with OpenBSD 6.1 is unfortunately too old to compile this release.
  • FreeBSD builds are still in progress

This release has been the result of approximately six months of work by over one hundred code contributors. Thanks to everyone who has helped in writing patches, testing, reporting bugs, and offering feedback over the last year.

As always, let us know if you encounter trouble.

How to get it

This release can be downloaded from

For older versions see

We supply binary builds in the native package format for many platforms, and the source distribution is available from the same place.


Background

Haskell is a standard lazy functional programming language.

GHC is a state-of-the-art programming suite for Haskell. Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license.

A wide variety of Haskell related resources (tutorials, libraries, specifications, documentation, compilers, interpreters, references, contact information, links to research groups) are available from the Haskell home page (see below).

On-line GHC-related resources

Relevant URLs on the World-Wide Web:

Supported Platforms

The list of platforms we support, and the people responsible for them, is here

Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform.


How to contribute

We welcome new contributors. Instructions on accessing our source code repository, and getting started with hacking on GHC, are available from the GHC developers' site run by Trac.

Community Resources

There are mailing lists for GHC users, developers, and monitoring bug tracker activity; to subscribe, use the Mailman web interface.

There are several other Haskell and GHC-related mailing lists on; for the full list, see the lists page.

Some GHC developers hang out on the #ghc and #haskell channels of the Freenode IRC network, too. See the Haskell wiki for details.

Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here.

by Ben Gamari at March 08, 2018 05:22 PM


Object Oriented Programming in Haskell


In object oriented programming an object is a value with a well-defined interface. The internal state of the object is closed to the outside world (encapsulation), but the behaviour of an object can be modified by redefining one or more of the functions in the object’s interface, typically through subclasses (inheritance). The internal state of the object is open to, and can be extended by, such subclasses. This is known as the open/closed principle.

For some problem domains this way of modelling is extremely suitable, and so one may wonder if we can take a similar approach in a purely functional language like Haskell, without losing purity. An obvious choice is to model an object as a record of functions that correspond to the public interface to that object. For example, we might define the “interface” of counters as

data Counter = Counter {
      tick    :: Counter
    , display :: Int
    }

and we can define a “constructor” for this class as

mkCounter :: Int -> Counter
mkCounter n = Counter {
      tick    = mkCounter (n + 1)
    , display = n
    }
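
For ordinary use this encoding works as expected; for example

display (tick (tick (mkCounter 0)))

evaluates to 2.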

The problem with this approach is that it does not allow us to “redefine methods” in any meaningful way. In particular, suppose we tried to define a “subclass” of counters which doubles the count in the display method:

twice :: Int -> Counter
twice n = (mkCounter n) {
      display = n * 2

If we tried this

display (tick (tick (tick (twice 0))))

we will get 3, not 6. In this blog post we will explain what open recursion is, provide some runnable examples that you can evaluate one step at a time to develop a better intuition for how it works, and show how it can be used to solve the problem above. These results are not new, but we hope that this blog post can serve to make them a bit more accessible to a larger audience.

We then discuss how we can extend this approach using lenses to allow “inheritance” to also extend the “state” of the object.

Explicit fixpoints

Most beginner Haskell students study the factorial function:

fac :: Int -> Int
fac 1 = 1
fac n = n * fac (n - 1)

Many find it confusing because the concept of recursion, or self-reference, is hard to grasp at first. To a seasoned Haskell programmer recursion comes completely naturally, but they too might have to stop and think to answer the following question: can we define fac in such a way that we can later extend it? For example, we’d like to be able to define a function skim that subtracts 1 (i.e. “skims from the top”) from the result of every recursive call; that is, skim should be equivalent to

skim :: Int -> Int
skim 1 = 1
skim n = n * skim (n - 1) - 1

but it should be defined in terms of fac. At the risk of stating the obvious, this is not a solution:

skim :: Int -> Int -- wrong
skim n = fac n - 1

since although it subtracts 1 from the final result of fac, it doesn’t do this at every recursive call.

The problem is that as defined fac cannot be extended; the recursive call to fac is fixed (“resolved”) at the definition point; it will always call itself, no matter how we wrap it. If we want to avoid this, we need to “delay” the resolution of that recursive call. Concretely, instead of calling itself, fac will take an additional parameter:

fac :: (Int -> Int) -> Int -> Int
fac r 1 = 1
fac r n = n * r (n - 1)

The intuition is that when we pass “itself” as that r parameter, we recover the original factorial function. We do that through the use of fix:

fix :: (a -> a) -> a
fix f = f (fix f)

To understand how this works, step through the evaluation of the computation of 5 factorial:

[Interactive evaluation stepper in the original post.]

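Written out by hand, the evaluation proceeds like this:

fix fac 5
= fac (fix fac) 5                -- unfold fix
= 5 * fix fac 4                  -- second equation of fac, with r = fix fac
= 5 * fac (fix fac) 4            -- unfold fix again
= 5 * (4 * fix fac 3)
= ...
= 5 * (4 * (3 * (2 * fac (fix fac) 1)))
= 5 * (4 * (3 * (2 * 1)))        -- first equation of fac
= 120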

Note that the function we call is fix fac, and that every time we expand that we end up with fac (fix fac); in other words, the recursion argument is the same as the function we call. This corresponds to our intuition above that we must pass the function “itself” as its recursion argument.

Open Recursion

So what is this useful for? Well, it is now possible to define skim in terms of fac:

skim :: (Int -> Int) -> Int -> Int
skim r 1 = 1
skim r n = (fac r n) - 1

Just like fac, skim also takes the function for the recursive call as an argument; this would for example make it possible to introduce further “extensions” later on. Only once we call fix do we “tie the knot”:

[Interactive evaluation stepper in the original post.]

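Written out by hand for a small case:

fix skim 3
= skim (fix skim) 3              -- unfold fix
= fac (fix skim) 3 - 1           -- second equation of skim
= 3 * fix skim 2 - 1             -- second equation of fac, with r = fix skim
= 3 * (skim (fix skim) 2) - 1    -- unfold fix
= 3 * (fac (fix skim) 2 - 1) - 1
= 3 * (2 * fix skim 1 - 1) - 1
= 3 * (2 * 1 - 1) - 1            -- fix skim 1 = skim (fix skim) 1 = 1
= 2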

Note that each time we expand fix skim, we end up with skim (fix skim); in other words, the recursive call that we pass to both skim and fac is fix skim. Delaying the resolution of the recursive call, leaving the recursion “open”, is the key ingredient that makes it possible to adjust the result of fac at every recursive call.


Let’s now go back to the example at the start of this blog post. Hopefully it should now be clear why the definition of twice as stated did not work: the recursion in mkCounter is resolved too early; when we construct a new object in tick, we want to construct a new “twice” object, not a new base object. We can use the exact same open recursion technique to make that happen. The definition of the “interface” itself remains unchanged

data Counter = Counter {
      tick    :: Counter
    , display :: Int
    }

but we add an additional “recursion argument” to mkCounter and twice:

mkCounter :: (Int -> Counter) -> (Int -> Counter)
mkCounter self n = Counter {
      tick    = self (n + 1)
    , display = n
    }

twice :: (Int -> Counter) -> (Int -> Counter)
twice self n = (mkCounter self n) {
      display = n * 2
    }

This works in precisely the same way that skim did, except that instead of subtracting 1 from the result of each recursive call, we now override display after each recursive call to mkCounter. Step through the evaluation of

display (tick (tick (tick (fix twice 0))))

to see this happening in detail:

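Abbreviated (tick is inherited from mkCounter, so it calls self (n + 1), where self is fix twice):

display (tick (tick (tick (fix twice 0))))
= display (tick (tick (tick (twice (fix twice) 0))))
= display (tick (tick (fix twice 1)))
= display (tick (fix twice 2))
= display (fix twice 3)
= display (twice (fix twice) 3)
= 3 * 2
= 6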


Note that, as before, fix twice evaluates to twice (fix twice), so that the recursion parameter that we pass to both mkCounter and twice will be fix twice; thus, although we do not redefine tick, it still constructs the right kind of object.

Recap: Lenses

Before we continue discussing how we can extend the state of the object, we will briefly recap lenses. If this is old hat to you, feel free to skip this section.

Lenses are a big topic in their own right, and there is no way we can do them justice in this blog post. We will recap the concepts we need for our purposes, but if you have never seen lenses before, this section may not be the most accessible introduction.

A Lens a b is essentially a way to “access” a small part (of type b) of a larger structure (of type a); we can both get that smaller part and we can modify it:

data Lens a b = Lens {
      get    :: a -> b
    , modify :: (b -> b) -> (a -> a)
    }

Lenses form a category; all that means is that we can always construct a lens from a structure to itself, and that we can compose lenses:

instance Category Lens where
  id     = Lens id id
  l . l' = Lens (get l . get l') (modify l' . modify l)

As an example, here are the lenses for the first and second component of a pair:

_1 :: Lens (a, b) a
_1 = Lens fst first

_2 :: Lens (a, b) b
_2 = Lens snd second

(where first and second come from Data.Bifunctor).
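For example (the third example composes lenses using the Category instance above):

ghci> get _1 ('a', True)
'a'
ghci> modify _2 (+ 1) ('a', 5)
('a',6)
ghci> get (_2 . _1) ((0, 'x'), True)
'x'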

Extending state

When we inherit from a class in object oriented programming, in addition to redefining some functions we can also extend the state of the object: we can add more fields. Let’s see how we can do this in our approach to modelling objects. Let’s start with a minor extension of the Counter interface:

data Counter = Counter {
      tick    :: Counter
    , tock    :: Counter
    , display :: String
    }

In our “base” implementation, both tick and tock just increment the counter’s internal state:

mkCounter :: (Int -> Counter) -> (Int -> Counter)
mkCounter self n = Counter {
      tick    = self (n + 1)
    , tock    = self (n + 1)
    , display = show n
    }

But now suppose we want to “inherit” from this base implementation, and introduce a “new field” so that we can count the ticks and tocks separately. If we follow the same approach as above, we’ll find that we’ll soon get stuck:

ticktock :: ((Int, Int) -> Counter) -> (Int, Int) -> Counter
ticktock self (n, m) = mkCounter (self . _reconstruct) n

The ticktock counter needs two integers in its state, for the two separate counters. But which self do we pass to mkCounter? We can’t just pass ticktock’s own self argument directly, because the types don’t line up. In the definition above we have left open a hole _reconstruct :: Int -> (Int, Int). That is, we somehow have to reconstruct the “extended” state of the subclass from just the state of the superclass. Why is this necessary? Well, let’s trace the constructor calls for the first tick after the ticktock counter is constructed:

  • We start with fix ticktock (0, 0)
  • This unfolds to ticktock (fix ticktock) (0, 0)
  • The ticktock constructor calls mkCounter (fix ticktock . reconstruct) 0.
  • Notice what happens in tick now: tick calls self (n + 1), which expands to

    (fix ticktock . reconstruct) (n + 1)

    or, equivalently,

    fix ticktock (reconstruct (n + 1))

    This means that although tick is constructing a ticktock counter (rather than a base mkCounter), every time we call tick the state of the subclass is reconstructed from the state of the superclass.

Clearly, this is not going to work. Instead what we need to do is parameterize the constructors with the state of the object; something like

mkCounter :: (st -> Counter) -> (st -> Counter)

Of course, this signature as it stands can’t quite work, because now mkCounter doesn’t know anything about st. To make it work, we will pass a lens to mkCounter that gives it access to the state it needs:

mkCounter :: Lens st Int
          -> (st -> Counter) -> (st -> Counter)
mkCounter l self st = Counter {
      tick    = self (modify l (+ 1) st)
    , tock    = self (modify l (+ 1) st)
    , display = show (get l st)
    }

In object oriented programming, subclasses can only extend the state of the superclass; they cannot remove fields. This is also the intuition behind the lens parameter: it tells the constructor of the “superclass” that the state may have been extended, but that it will at least contain the state that you need.

With this generalization we can now define ticktock:

ticktock :: Lens st (Int, Int)
         -> (st -> Counter) -> (st -> Counter)
ticktock l self st = (mkCounter (_1 . l) self st) {
      tock    = self (modify (_2 . l) (+ 1) st)
    , display = show (get l st)
    }

Like mkCounter, ticktock gets a lens parameter (because its state may later be extended again). The self parameter that we pass to mkCounter is just ticktock’s own unmodified self parameter; the only difference is that the lens we pass to mkCounter is the composition of _1 with ticktock’s lens argument, essentially telling mkCounter that the first component of the pair of integers in ticktock’s state is the state that mkCounter should use.

When we now evaluate

display (tick (tock (tick (fix (ticktock id) (0, 0)))))

we will get

(2, 1)

as expected.


Conclusions

It is important when writing code to choose the right way to model the problem domain. For some problem domains the most natural model is object oriented programming. Of course, since we are Haskell programmers, we’d ideally like this to be referentially transparent object oriented programming, with no side effects.

Object oriented programming in Haskell has a long history, with encodings ranging from the very ambitious in Haskell’s Overlooked Object System by Oleg Kiselyov and Ralf Lämmel, to the very minimal in the use of OOP in the design of the Yi editor as described by JP Bernardy, and everything in between, such as the recent blog post series on Object-Oriented Haskell by Tobias Dammers.

The use of open recursion to model inheritance is well-known; for instance, it is described in Chapter 18, Case Study: Imperative Objects in Pierce’s excellent book Types and Programming Languages. It is also a key component in Bruno Oliveira’s paper The Different Aspects of Monads and Mixins. Of course, the use of explicit fixpoints has myriad other uses also; for instance, explicit fixpoints at the type level is what makes designs such as Data types à la carte by Wouter Swierstra possible. Despite being well-known, it can nonetheless be confusing; our hope is that this blog post can help to make the technique more accessible.

The use of lenses as we presented it in this blog post to allow “subclasses” to extend the state is to the best of my knowledge novel, although Jonathan Brachthäuser’s master’s thesis on Modularization of Algorithms on Complex Data Structures in Scala comes close. In hindsight it seems a very natural solution to the problem. It also provides a very direct implementation of the open/closed principle, where subclasses have full access to the object’s internal state, but only the public API is visible to external clients.

Interestingly, it also generalizes subtyping in traditional object oriented languages. While encapsulation (state hiding) in OOP languages means that an object’s internal state can be changed without affecting clients of that object, it is not possible to change an object’s internal representation in subclasses. In our approach this is perfectly possible, as long as there is a lens from the state of the subclass to the state of the superclass; in general, this lens can be arbitrarily complex (although efficiency should be considered, of course).


  1. Actually, this is closer to modelling prototype based inheritance, like in JavaScript or Self, rather than class-based inheritance. We will ignore this difference in this blog post, since class-based inheritance is much better known.

  2. Added a seq node just to make the evaluation a little easier to understand (no need to allocate a thunk).

by edsko at March 08, 2018 09:58 AM

Jasper Van der Jeugt

Haskell Design Patterns: The Handle Pattern


I’d like to talk about a design pattern in Haskell that I’ve been calling the Handle pattern. This is far from novel – I’ve mentioned this before and the idea is definitely not mine. As far as I know, in fact, it has been around since basically forever1. Since it is ridiculously close to what we’d call common sense2, it’s often used without giving it any explicit thought.

I first started more consciously using this pattern when I was working together with Simon Meier at Better (aka erudify). Simon did a writeup about this pattern as well. But as I was explaining this idea again at last week’s HaskellerZ meetup, I figured it was time to do an update of that article.

The Handle pattern allows you to write stateful applications that interact with external services in Haskell. It complements pure code (e.g. your business logic) well, and it is somewhat the result of iteratively applying the question:

  • Can we make it simpler?
  • Can we make it simpler still?
  • And can we still make it simpler?

The result is a powerful and simple pattern that does not even require Monads3 or Monad transformers to be useful. This makes it extremely suitable for beginners trying to build their first medium-sized Haskell application. And that does not mean it is beginners-only: this technique has been applied successfully at several Haskell companies as well.

Table of contents

  1. Introduction
  2. Context
  3. The module layout
    1. A Database Handle
    2. Creating a Handle
    3. Destroying a Handle
    4. Reasonable safety
    5. Summary of the module layout
  4. Handle polymorphism
    1. A Handle interface
    2. A Handle implementation
  5. Compared to other approaches


Introduction

In Haskell, we try to capture ideas in beautiful, pure and mathematically sound patterns, for example Monoids. But at other times, we can’t do that. We might be dealing with some inherently mutable state, or we are simply dealing with external code which doesn’t behave nicely.

In those cases, we need another approach. What we’re going to describe feels suspiciously similar to Object Oriented Programming:

  • Encapsulating and hiding state inside objects
  • Providing methods to manipulate this state rather than touching it directly
  • Coupling these objects together with methods that modify their state

As you can see, it is not exactly the same as Alan Kay’s original definition of OOP4, but it is far from the horrible incidents that permeate our field such as UML, abstract factory factories and broken subtyping.

Before we dig in to the actual code, let’s talk about some disclaimers.

Context

Pretty much any sort of Haskell code can be written in this particular way, but that doesn’t mean that you should. This method relies heavily on IO. Whenever you can write things in a pure way, you should attempt to do that and avoid IO. This pattern is only useful when IO is required.

Secondly, there are many alternatives to this approach: complex monad transformer stacks, interpreters over free monads, uniqueness types, effect systems… I don’t want to claim that this method is better than the others. All of these have advantages and disadvantages, so one must always make a careful trade-off.

The module layout

For this pattern, we’ve got a very well-defined module layout. I believe this helps with recognition which I think is also one of the reasons we use typeclasses like Monoid.

When I’m looking at the documentation of libraries I haven’t used yet, the types will sometimes look a bit bewildering. But then I see that there’s an instance Monoid. That’s an “Aha!” moment for me. I know what a Monoid is. I know how they behave. This allows me to get up to speed with this library much faster!

Using a consistent module layout in a project (and even across projects) provides, I think, very similar benefits to that. It allows new people on the team to learn parts of the codebase they are yet unfamiliar with much faster.

A Database Handle

Anyway, let’s look at the concrete module layout we are proposing with this pattern. As an example, let’s consider a database. The type in which we are encapsulating the state is always called Handle. That is because we design for qualified import.

We might have something like:
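A sketch of what this might look like (the connection field is hypothetical):

data Handle = Handle
    { hConnection :: Postgres.Connection  -- hypothetical database connection
    , hLogger     :: Logger.Handle        -- another Handle, see below
    }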

The internals of the Handle typically consist of static fields and other handles, MVars, IORefs, TVars, Chans… With our Handle defined, we are able to define functions using it. These are usually straightforward imperative pieces of code and I’ll omit them for brevity5:
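For instance (User and Mail are placeholder types):

createUser  :: Handle -> Text -> IO User
getUserMail :: Handle -> User -> IO [Mail]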

Some thoughts on this design:

  1. We call our functions createUser rather than databaseCreateUser. Again, we’re working with qualified imports so there’s no need for “C-style” names.

  2. All functions take the Handle as the first argument. This is very important for consistency, but also for polymorphism and code style.

    With code style, I mean that the Handle is often a syntactically simpler expression (e.g. a name) than the argument (which is often a composed expression). Consider:
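    For instance (all the names in this snippet are hypothetical):

    Database.createUser database (User (firstName <> " " <> lastName) age)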


  3. Other Handles (e.g. Logger.Handle) are stored in a field of our Database.Handle. You could also remove it there and instead have it as an argument wherever it is needed, for example:
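    -- a sketch: take the logger explicitly instead of storing it
    createUser :: Handle -> Logger.Handle -> Text -> IO User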

    I usually prefer to put it inside the Handle since that reduces the amount of arguments required for functions such as createUser. However, if the lifetime of a Logger.Handle is very short6, or if you want to reduce the amount of dependencies for new, then you could consider doing the above.

  4. Datatypes such as Mail may be defined in this module, and may even be specific to this function. I’ve written about ad-hoc datatypes before.

Creating a Handle

I mentioned before that an important advantage of using these patterns is that programmers become “familiar” with it. That is also the goal we have in mind when designing our API for the creation of Handles.

In addition to always having a type called Handle, we’ll require the module to always have a type called Config. This is where we encode our static configuration parameters – and by static I mean that we shouldn’t have any IORefs or the like here: this Config should be easy to create from pure code.

We can also offer some way to create a Config. This really depends on your application. If you use the configurator library, you might have something like:
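A sketch using the Data.Configurator API (the Config fields and key names are hypothetical):

parseConfig :: C.Config -> IO Config
parseConfig c = Config
    <$> C.require c "database.host"
    <*> C.require c "database.port"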

On the other hand, if you use aeson or yaml, you could write:
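Assuming the same hypothetical two-field Config:

instance FromJSON Config where
    parseJSON = withObject "Config" $ \o -> Config
        <$> o .: "host"
        <*> o .: "port"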

You could even use a Monoid to support loading configurations from multiple places. But I digress – the important part is that there is a type called Config.

Next is a similar pattern: in addition to always having a Config, we’ll also always provide a function called new. The parameters follow a similarly strict pattern:
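A sketch of its shape:

new :: Config          -- 1. Static configuration
    -> Logger.Handle   -- 2. Dependencies (other Handles)
    -> IO Handle       -- 3. The Handle we created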

Inside the new function, we can create some more IORefs, file handles, caches… if required and then store them in the Handle.

Destroying a Handle

We’ve talked about creation of a Handle, and we mentioned the normal functions operating on a Handle (e.g. createUser) before. So now let’s consider the final stage in the lifetime of Handle.

Haskell is a garbage collected language and we can let the runtime system take care of destroying things for us – but that’s not always a great idea. Many resources (file handles in particular come to mind as an example) are scarce.

There is quite a strong correlation between scarce resources and things you would naturally use a Handle for. That’s why I recommend always providing a close as well, even if it does nothing. This is a form of forward compatibility in our API: if we later decide to add some sort of log files (which need a close), we can do so without individually mailing all our module users that they now need to add a close to their code.
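Its shape is as simple as it gets:

close :: Handle -> IO ()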

Reasonable safety

When you’re given a new and close, it’s often tempting to add an auxiliary function like:

I think this is a great idea. In fact, it’s sometimes useful to only provide the withHandle function, and hide new and close in an internal module.

The only caveat is that the naive implementation of this function:
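A sketch of that naive version:

withHandle :: Config -> Logger.Handle -> (Handle -> IO a) -> IO a
withHandle config logger f = do
    h <- new config logger
    x <- f h
    close h
    return x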

Is wrong! In any sort of withXyz function, you should always use bracket to guard against exceptions. This means the correct implementation is:
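A version using bracket from Control.Exception:

withHandle :: Config -> Logger.Handle -> (Handle -> IO a) -> IO a
withHandle config logger f = bracket (new config logger) close f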

Well, it’s even shorter! In case you want more information on why bracket is necessary, this blogpost gives a good in-depth overview. My summary of it as it relates to this article would be:

  1. Always use bracket to match new and close
  2. You can now use throwIO and killThread safely

It’s important to note that withXyz functions do not provide complete safety against things like use-after-close our double-close. There are many interesting approaches to fix these issues but they are way beyond the scope of this tutorial – things like Monadic Regions and The Linearity Monad come to mind. For now, we’ll rely on bracket to catch common issues and on code reviews to catch team members who are not using bracket.

Summary of the module layout

If we quickly summarise the module layout, we now have:
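Roughly, as a sketch:

module MyApp.Database
    ( Config (..)    -- static configuration
    , parseConfig    -- some way to obtain a Config

    , Handle         -- the Handle, with its constructor hidden
    , new
    , close
    , withHandle

    , createUser     -- the normal functions operating on the Handle
    ) where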

This is a well-structured, straightforward and easy to learn organisation. Most of the Handles in any application should probably look this way. In the next section, we’ll see how we can build on top of this to create dynamic, customizable Handles.

Handle polymorphism

It’s often important to split between the interface and implementation of a service. There are countless ways to do this in programming languages. For Haskell, there is:

  • Higher order functions
  • Type classes and type families
  • Dictionary passing
  • Backpack module system
  • Interpreters over concrete ASTs

The list is endless. And because Haskell on one hand makes it so easy to abstract over things, and on the other hand makes it possible to abstract over pretty much anything, I’ll start this section with a disclaimer.

Premature abstraction is a real concern in Haskell (and many other high-level programming languages). It’s easy to quickly whiteboard an abstraction or interface and unintentionally end up with completely the wrong thing.

It usually goes like this:

  1. You need to implement a bunch of things that look similar
  2. You write down a typeclass or another interface-capturing abstraction
  3. You start writing the actual implementations
  4. One of them doesn’t quite match the interface so you need to change it two weeks in
  5. You add another parameter, or another method, mostly for one specific interface
  6. This causes some problems or inconsistencies for interfaces
  7. Go back to (4)

What you end up with is a leaky abstraction that is the product of all concrete implementations – where what you really wanted is the greatest common divisor.

There’s no magic bullet to avoid broken abstractions so my advice is usually to first painstakingly do all the different implementations (or at least a few of them). After you have something working and you have emerged victorous from horrible battles with the guts of these implementations, then you could start looking at what the different implementations have in common. At this point, you’ll also be a bit wiser about where they differ – and you’ll be able to take these important details into account, at which point you retire from just being an idiot drawing squares and arrows on a whiteboard.

This is why I recommend sticking with simple Handles until you really need it. But naturally, sometimes we really need the extra power.

A Handle interface

So let’s do the simplest thing that can possibly work. Consider the following definition of the Handle we discussed before:

What’s the type of createUser now?

It’s exactly the same as before! This is pretty much a requirement: it means we can move our Handles to this approach when we need it, not when we envision that we will need it at some point in the future.

A Handle implementation

We can now create a concrete implementation for this abstract Handle type. We’ll do this in a module like MyApp.Database.Postgres.

The Config datatype and the new function have now moved to the implementation module, rather than the interface module.

Since we can have any number of implementation modules, it is worth mentioning that we will have multiple Config types and new functions (exactly one of each per implementation). Configurations are always specific to the concrete implementation. For example, an sqlite database may just have a FilePath in the configuration, but our Postgres implementation will have other details such as port, database, username and password.

In the implementation of new, we simply initialize a Handle:
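A sketch (connect, Logger.info and insertUser are hypothetical helpers):

new :: Config -> Logger.Handle -> IO Handle
new config logger = do
    conn <- connect config  -- hypothetical: open the database connection
    pure Handle
        { createUser = \name -> do
            Logger.info logger ("Creating user: " <> name)  -- hypothetical Logger API
            insertUser conn name                            -- hypothetical query helper
        }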

Of course, we can manually float out the body of createUser since constructing these large records gets kind of ugly.

Compared to other approaches

We’ve presented an approach to modularize the effectful layer of medium- to large-scaled Haskell applications. There are many other approaches to tackling this, so any comparison I come up with would probably be inexhaustive.

Perhaps the most important advantage of using Handles is that they are first class values that we can freely mix and match. This often does not come for free when using more exotic strategies.

Consider the following type signature from a Hackage package – and I do not mean to discredit the author, the package works fine but simply uses a different approach than my personal preference:
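From memory, the signature in question (runJsonRpcT from the json-rpc package) looks approximately like this:

runJsonRpcT
    :: (MonadLoggerIO m, MonadBaseControl IO m)
    => Ver                   -- JSON-RPC version
    -> Bool                  -- ignore incoming requests or notifications
    -> Sink ByteString m ()  -- sink to send messages
    -> Source m ByteString   -- source of incoming messages
    -> JsonRpcT m a          -- JSON-RPC action
    -> m a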

I’m a fairly experienced Haskeller and it still takes me a bit of eye-squinting to see how this will fit into my application, especially if I want to use this package with other libraries that do not use the Sink/Source or MonadBaseControl abstractions.

It is somewhat obvious that one running call to runJsonRpcT corresponds to being connected to one JSON-RPC endpoint, since it takes a single sink and source. But what if we want our application to be connected to multiple endpoints at the same time?

What if we need to have hundreds of thousands of these, and we want to store them in some priority queue, only considering the most recent ones in the general case? How would you go about that?

You could consider running a lightweight thread for every runJsonRpcT, but that means you now need to worry about thread overhead, communicating exceptions between threads and killing the threads after you remove them. Whereas with first-class handles, we would just have a HashPSQ Text Int JsonRpc.Handle, which is much easier to reason about.

So, I guess one of the oldest and most widely used approaches is MTL-style monad transformers. This uses a hierarchy of typeclasses to represent access to various subsystems.

I love working with MTL-style transformers in the case of pure code, since they often allow us to express complex ideas concisely. For effectful code, on the other hand, they do not seem to offer many advantages and often make it harder to reason about code.

My personal preference for writing complex effectful code is to reify the effectful operations as a datatype and then write pure code manipulating these effectful operations. An interpreter can then simply use the Handles to perform the effects. For simpler effectful code, we can just use Handles directly.

I have implemented a number of these patterns in the (ever unfinished) example web application fugacious, in case you want to see them in action or if you want a more elaborate example than the short snippets in this blogpost. Finally, I would like to thank Alex Lang and Nicolas Mattia for proofreading, and Titouan Vervack for many corrections and typos.

  1. Well, System.IO.Handle has definitely been around for a while.

  2. If you’re reading this article and you’re thinking: “What does this guy keep going on about? This is all so obvious!” – Well, that’s the point!

  3. It does require IO, but we don’t require thinking about IO as a Monad. If this sounds weird – think of lists. We work with lists all the time but we just consider them lists of things, we don’t constantly call them “List Monad” or “The Free Monoid” for that matter.

  4. And indeed, we will touch on a common way of encoding OOP in Haskell – creating explicit records of functions – but we’ll also explain why this isn’t always necessary.

  5. If you want to see a full example, you can refer to this repository that I have been using to teach practical Haskell.

  6. With a short lifetime I mean you would create a new Logger.Handle for every call to createUser. But even in that case you could consider turning Logger.Handle into something like a resource pool, from which you could request a new concrete logging interface to log things. It really depends on your use case in the end…


by Jasper Van der Jeugt at March 08, 2018 12:00 AM

March 05, 2018

Brent Yorgey

Beeminder guest post on Beeminding All The Things

I’ve written about my use of Beeminder a few times here before. I just wrote a guest post for the Beeminder blog entitled Beeminding All The Things, in which I explain some lessons learned and best practices for dealing with a very large number of Beeminder goals (50, in my case)—and why you might even consider having so many in the first place. If thinking about tools for productivity is your cup of tea, you might be interested to check it out.

by Brent at March 05, 2018 04:05 AM

March 04, 2018

Mateusz Kowalczyk

Element-wise vector addition microbenchmarks.

Posted on March 4, 2018 by Fūzetsu

At work we have found a need for a function
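Modulo the exact element type, its shape was essentially this (a sketch):

vsum :: NonEmpty (Vector Double) -> Vector Double
-- element-wise sum of a non-empty list of equal-length vectors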

This was a fairly hot function, so making it quick would be nice. At the time I wrote a small benchmark in the project itself, but it was difficult to measure with everything else that was going on, and we only had a limited data set. In the benchmarks we use a non-empty list of vectors instead (which is always fine for us: it removes a runtime check and a failure case). There is also an assumption that all the vectors are of uniform length.

Today I finally broke through and decided to just isolate the function and write a benchmark suite. I have published the code on GitHub. I don’t plan on putting it on Hackage as it’s neither a library nor a useful executable.

What I don’t do in the repository is discuss results that I have gotten. I will be referring to functions defined in the repository. I have copied and pasted them at the bottom of the post for quick reference.

Single vector, many elements

The input sample is a single vector with 1 million elements.

FoldZip, RecurseZip, RecurseZipWithN all perform about the same: after all their job is to simply return the vector.

The ST-family of implementations suffers. It’s pretty easy to guess why. Let’s look at one of the implementations:
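A sketch of the idea (assuming qualified imports V for Data.Vector.Unboxed and VM for Data.Vector.Unboxed.Mutable; not the exact code from the repository):

vsumSt :: NonEmpty (V.Vector Double) -> V.Vector Double
vsumSt (v :| vs) = runST $ do
  mv <- V.thaw v  -- thaw copies v, even when vs is empty
  forM_ vs $ \w ->
    forM_ [0 .. VM.length mv - 1] $ \i ->
      VM.unsafeModify mv (+ V.unsafeIndex w i) i
  V.unsafeFreeze mv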

We just thawed the vector (resulting in copying) when it was not necessary: in the case of single vector input, we should just yield the vector. I think we could fix this in two ways:

  1. Add vsum (v :| []) = v case. This will handle the awkward edge case.
  2. Use modify. This will only perform a copy of the vector when needed. In this case it won’t be needed, we won’t perform any destructive updates and should be able to get off scot-free with low cost.

Let me add both of these ways and let’s compare results. As I’m lazy, I will only add it for the UncheckedStFromBack variant as I don’t want to add 8 new modules when this is the “superior” version as you will see from the results later.

benchmarking Data.Vector.Unboxed/1x1000000/FoldZip
time                 9.012 ns   (8.963 ns .. 9.072 ns)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 7.168 ns   (6.909 ns .. 7.418 ns)
std dev              785.4 ps   (658.9 ps .. 962.8 ps)
variance introduced by outliers: 93% (severely inflated)

benchmarking Data.Vector.Unboxed/1x1000000/RecurseZip
time                 9.251 ns   (9.203 ns .. 9.317 ns)
                     0.997 R²   (0.994 R² .. 0.999 R²)
mean                 7.325 ns   (7.030 ns .. 7.613 ns)
std dev              785.3 ps   (638.0 ps .. 961.4 ps)
variance introduced by outliers: 93% (severely inflated)

benchmarking Data.Vector.Unboxed/1x1000000/RecurseZipWithN
time                 8.712 ns   (8.554 ns .. 8.997 ns)
                     0.996 R²   (0.990 R² .. 1.000 R²)
mean                 6.846 ns   (6.525 ns .. 7.243 ns)
std dev              964.1 ps   (717.2 ps .. 1.391 ns)
variance introduced by outliers: 96% (severely inflated)

benchmarking Data.Vector.Unboxed/1x1000000/UncheckedStFromBack
time                 514.6 μs   (507.5 μs .. 523.4 μs)
                     0.997 R²   (0.992 R² .. 0.999 R²)
mean                 412.5 μs   (393.9 μs .. 431.3 μs)
std dev              46.70 μs   (37.85 μs .. 58.42 μs)
variance introduced by outliers: 81% (severely inflated)

benchmarking Data.Vector.Unboxed/1x1000000/UncheckedStFromBackModify
time                 1.064 ms   (991.1 μs .. 1.143 ms)
                     0.979 R²   (0.970 R² .. 0.999 R²)
mean                 796.9 μs   (750.5 μs .. 858.1 μs)
std dev              145.4 μs   (104.6 μs .. 192.7 μs)
variance introduced by outliers: 91% (severely inflated)

benchmarking Data.Vector.Unboxed/1x1000000/UncheckedStFromBackBailEmpty
time                 11.01 ns   (10.68 ns .. 11.33 ns)
                     0.990 R²   (0.984 R² .. 0.994 R²)
mean                 9.270 ns   (8.907 ns .. 9.716 ns)
std dev              1.129 ns   (926.8 ps .. 1.456 ns)
variance introduced by outliers: 94% (severely inflated)

Well, the modify version was disappointing: it’s even worse than the original by a factor of two. I suppose it shows I don’t understand it and will need to study what it actually does. Here’s the implementation I chose for it:
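Roughly (same qualified imports and caveats as the sketch above):

vsumModify :: NonEmpty (V.Vector Double) -> V.Vector Double
vsumModify (v :| vs) = V.modify
  (\mv -> forM_ vs $ \w ->
            forM_ [0 .. VM.length mv - 1] $ \i ->
              VM.unsafeModify mv (+ V.unsafeIndex w i) i)
  v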

The good news is that bailing out on an empty tail helps a lot, and the check only takes an extra 2ns. I think it’s worth it if the alternative is paying 500µs on a hit. It of course depends on the use-case: if you’re the only consumer of the function and you know you’ll always have multiple vectors, maybe you want to save the 2ns.

If you’re interested in the numbers, see the html report for this case or the textual output.

Many vectors, single element

The input sample is 1 million vectors with a single element each.

Some weirdness starts here: RecurseZipWithN is the fastest by far for boxed vectors. It is however by far the slowest for storable and unboxed vectors.

I wonder if the size of the function is hindering some optimisations. Let’s add a new variant: one that still does zipWith6 but if there is not enough work left, it folds over the rest of the input.

Slightly better for boxed case and even worse for storable and unboxed cases. We could try with zipWith3 but I suspect it would not help much.

The UncheckedStFromBackBailEmpty variant from the previous section actually performs even better than the original, though only marginally. The modify variant is comparable to the original, so that’s hopeful at least. The ST variants are by far the fastest for this case, with bail empty being the best but not by a lot. All are acceptable.

If you’re interested in the numbers, see the html report for this case or the textual output.

Many vectors, many elements

The input sample is 5000 vectors with 5000 elements and the results are pretty much the same as for the previous case. Or so I would have wanted to say but then:

Interesting. I don’t want to sit through another minutes long run (boxed vectors take 7 seconds per iteration on 5000x5000 compared to 20ms for unboxed and 50ms for storable) so let’s drop down to 1000x1000. I definitely will have to keep that crash in mind. I wonder why it crashed there and not in Storable or boxed. Very bizarre. I’m actually guessing it might a bug in criterion considering the weird index (overflow?) and that it came up half way through its printing. Actually that seems to be exactly it: here’s the (fixed) criterion issue. Fixed two months ago at that. I guess I can ask. I could update criterion through stack.yaml but I don’t want to invalidate previous results in middle of the experiment. Let’s go to 1000x1000.

The results are very similar to the many vectors, single element case, just scaled differently. The zipWithN solutions are still the fastest for boxed vectors (a factor of 2x better than the others) and just completely awful for storable and unboxed (a factor of 20-30x worse than the others).

If you’re interested in the numbers, see the html report for this case or the textual output.

200 vectors, 200 elements

Out of curiosity I also ran with a smaller data set of 200x200. With this amount of data the zipWithN versions are actually worse even for boxed vectors. The usual ST bail-empty version is the fastest: well, on par with UncheckedStFromFront, which seems to have taken the lead but only within a small margin. It seems that there’s a point for boxed vectors where the zipWithN functions stop being the worst and become the best. Perhaps it’s all the other functions that degrade with larger inputs.

For unboxed and storable vectors the story is the usual: ST variants win with unchecked ST from back versions coming out on top.

If you’re interested in the numbers, see the html report for this case or the textual output.

General remarks

Comments with regards to the overall process

Data generation

The data generation is nothing exotic:
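Something along these lines (a sketch; NE is Data.List.NonEmpty, and the real generator lives in the repository):

mkInput :: Int -> Int -> NonEmpty (V.Vector Double)
mkInput nVectors nElements = NE.fromList
  [ V.generate nElements (\i -> fromIntegral (i + k))
  | k <- [1 .. nVectors]
  ]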

As we’re using criterion (and weigh for allocations), this is evaluated to normal form before being passed into the benchmark.

Number gathering

It would take too long to run the full set of benchmarks every time I wanted to make a change. I started with the functions listed below in the appendix as the baseline and only ran the full set of criterion benchmarks once. The number of benchmark cases that run is 3 * length sizes * length vsumFunctions. This number can grow fairly fast, and boxed vector benchmarks especially take a long time.

Therefore once I had a baseline, I re-ran benchmarks for the cases I was interested in for this blog post. These are the numbers you see above. You can see the original full-run numbers if you wish: text version and (WARNING: this report is rather large and will potentially hang your browser. It’s not very useful to look at either, because every time is on the same diagram and boxed vector operations for large inputs completely dwarf the rest, making it useless as a visual aid) html version. You’ll note that the ST functions to deal with singleton cases are missing, as well as the zipWith6 variant of zipWithN: these functions were introduced during this blog post.

Memory usage

There are also allocation numbers available. These are in full, as they only need to run a single iteration per case. You can see them here. The numbers are fairly uniform. The zipWithN cases seem to have a little less live data sticking around, but pay in a significant increase in the number of garbage collections. All other differences between the allocation numbers are fairly negligible.

Contrived test cases

We’re benchmarking the speed of the functions on their own. They do not live with other code and don’t benefit from any optimisations one might get if they were inlined into real codebase. As always, take these numbers with a grain of salt. They are at best like-for-like comparisons. In real code performance may change due to fusion &c. Write benchmarks for your application before changing your code based on microbenchmarks here (or anywhere).

Different vector types

Boxed vectors seem to behave very differently performance-wise compared to storable and unboxed vectors. It’s unclear to me why. Of course I expect them to be much slower, but it makes me curious why they don’t scale in the same way as the other vector types. Why are the ST functions worse than the zipWithN variants for larger input data for boxed vectors? This may be an investigation for another day.

Specialisation and inlining

I had to explicitly specialise the functions otherwise GHC worked on the unspecialised generic versions and the results were quite a lot different. This is a standard practice anyway so remember to do it in your code too. Remember to add INLINABLE too as SPECIALISE pragmas turn off inlining by default. In real code, we may instead benefit from INLINE over SPECIALISE. For more information on these things check out this excellent post on GHC optimisation and fusion.

ST traversal from the back and from the front

Some of the ST functions presented have two variants: one that traverses vectors from the back and one from the front. The reason why I’m measuring both is that it does seem to make a noticeable difference, with traversal from the back usually winning slightly. I tried this because I was told fairly recently that checking if something is zero can be faster than comparing it to another number, as there are special instructions to do that. I do not know if that’s what’s happening or if it is true, but it seems faster.

Non-exhaustive examples

There are multiple other ways we could write vsum. A simple example: instead of using explicit indexing in the ST traversals, just use imapM_. If I remember correctly this one optimises to the same loop as the forward traversals, but of course you’re relying on optimisations. It would be very good to add more cases. Another example is the aforementioned zipWithN for N other than 3. You never know. Mixing existing examples is also possible: we can mix bail empty and modify, always bailing on an empty tail (so we don’t pay the price of a pointless modify, which we found out to be quite bad) while still receiving any benefits of modify in the common case, if there are any.

Additions encouraged and most welcome.

hmatrix, repa, accelerate, parallel, …

None of the fancy parallel libraries were used. Not even the not-so-fancy ones. Mostly because I was lazy, but in large part because those usually only play nicely together in certain scenarios. In benchmarks on the real codebase, the vsum we had there was faster only up to the point at which hmatrix started winning out, for rather large vectors. I want to avoid poorly benchmarking these libraries where data conversions &c. would completely skew the data. Often the rest of your program already deals in those formats.

RTS, hardware

No fancy RTS flags were passed, only -N. This means parallel GC was on &c. Everything was on fair ground however so that’s fine.

The numbers presented here were obtained on an i7 6770k, 64GB RAM rig running NixOS.


Conclusions

The general winner has emerged, and it is
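UncheckedStFromBackBailEmpty. A sketch of its shape (not the exact module from the repository; same qualified imports as earlier):

vsum :: NonEmpty (V.Vector Double) -> V.Vector Double
vsum (v :| []) = v              -- bail out early on a singleton
vsum (v :| vs) = runST $ do
  mv <- V.thaw v
  let go _ (-1) = pure ()
      go w i    = VM.unsafeModify mv (+ V.unsafeIndex w i) i >> go w (i - 1)
  mapM_ (\w -> go w (VM.length mv - 1)) vs
  V.unsafeFreeze mv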

This performed very marginally worse for the unusual case of a single vector (2ns loss). It actually performed very marginally better than the unchecked cases on other inputs. Rarely some other ST function would match it or come very slightly ahead: the differences can easily be dismissed as noise in measurements. Note this function is quite unsafe and will do awful, horrible things if any of the vectors is smaller than the first one. This assumption is OK for our use-case and can be easily statically verified. If this was to be a library function, more care should be taken. At the very least use ! for indexing instead and crash the application rather than leaking memory contents to an attacker.

For boxed vectors, zipWithN versions were on top after a certain amount of data. If you’re using those vectors, be wary. You shouldn’t be using them for numerical data usually anyway.

I would definitely like to understand why zipWithN versions are so badly performing as well as why we see such a discrepancy in patterns for boxed vectors.

I would also like to confirm the back/front traversal speed reasons. If there’s interest I might do it in another post. It might just be the case of peeking into the generated ASM. I would also like to repeat the experiments with LLVM as well as play with different compilation flags to both GHC and LLVM.

Appendix: Functions used in the benchmarking

Fold zip

This is a straight forward zip going vector by vector.
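A sketch:

vsumFoldZip :: NonEmpty (V.Vector Double) -> V.Vector Double
vsumFoldZip (v :| vs) = foldl' (V.zipWith (+)) v vs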

Recurse zip

Recurse zipWithN

This is a recursive implementation using up to zipWith6 (as that’s the highest defined in the vector library).
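A sketch of the idea: consume five extra vectors at a time with zipWith6, falling back to plain zipWith for the remainder:

vsumZipN :: NonEmpty (V.Vector Double) -> V.Vector Double
vsumZipN (v :| vs) = go v vs
  where
    go acc (a : b : c : d : e : rest) =
      go (V.zipWith6 add6 acc a b c d e) rest
    go acc (a : rest) = go (V.zipWith (+) acc a) rest
    go acc []         = acc
    add6 p q r s t u  = p + q + r + s + t + u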

Checked ST from back

Checked ST from front

Unchecked ST from back (security vulnerability edition)

Unchecked ST from front (security vulnerability edition)

March 04, 2018 10:18 PM

Dominic Steinitz

Implicit Runge Kutta and GNU Scientific Library


These are some very hasty notes on Runge-Kutta methods and IRK2 in particular. I make no apologies for missing lots of details. I may try and put these in a more digestible form but not today.

Some Uncomprehensive Theory

In general, an implicit Runge-Kutta method is given by

\displaystyle y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i


\displaystyle k_i = f\left( t_n + c_i h,\ y_{n} + h \sum_{j=1}^s a_{ij} k_j \right), \quad i = 1, \ldots, s


\displaystyle \sum_{j=1}^{i-1} a_{ij} = c_i \text{ for } i=2, \ldots, s

Traditionally this is written as a Butcher tableau:

\displaystyle \begin{array}{c|cccc} c_1 & a_{11} & a_{12}& \dots & a_{1s}\\ c_2 & a_{21} & a_{22}& \dots & a_{2s}\\ \vdots & \vdots & \vdots& \ddots& \vdots\\ c_s & a_{s1} & a_{s2}& \dots & a_{ss} \\ \hline & b_1 & b_2 & \dots & b_s\\ & b^*_1 & b^*_2 & \dots & b^*_s\\ \end{array}

and even more succintly as

\displaystyle \begin{array}{c|c} \mathbf{c}& A\\ \hline & \mathbf{b^T} \\ \end{array}

For a Gauß-Legendre method we set the values of the c_i to the zeros of the shifted Legendre polynomials.

An explicit expression for the shifted Legendre polynomials is given by

\displaystyle \tilde{P}_n(x) = (-1)^n \sum_{k=0}^n \binom{n}{k} \binom{n+k}{k} (-x)^k

The first few shifted Legendre polynomials are:

\displaystyle \begin{array}{r|r} n & \tilde{P}_n(x) \\ \hline 0 & 1 \\ 1 & 2x-1 \\ 2 & 6x^2-6x+1 \\ 3 & 20x^3-30x^2+12x-1 \\ 4 & 70x^4-140x^3+90x^2-20x+1 \\ 5 & 252x^5 -630x^4 +560x^3 - 210 x^2 + 30 x -1 \end{array}


If we define

\displaystyle q(t) \triangleq \prod_{j=1}^s (t - c_j) \quad \text{and} \quad q_l \triangleq \frac{q(t)}{t - c_l}, \, l = 1,2, \ldots, s

then the co-location method gives

\displaystyle a_{j,i} \triangleq \int_0^{c_j} \frac{q_i(\tau)}{q_i(c_i)}\,{\mathrm d}\tau
\displaystyle b_{j} \triangleq \int_0^{1} \frac{q_j(\tau)}{q_j(c_j)}\,{\mathrm d}\tau

For s = 1 we have 2x - 1 = 0 and thus c_1 = 1/2 and the Butcher tableau is

\displaystyle \begin{array}{c|c} 1/2 & 1/2\\ \hline & 1 \\ \end{array}

that is, the implicit RK2 method aka the mid-point method.

For s = 2 we have 6x^2-6x+1 = 0 and thus c_1 = \frac{1}{2} - \frac{1}{2\sqrt{3}} and c_2 = \frac{1}{2} + \frac{1}{2\sqrt{3}} and the Butcher tableau is

\displaystyle \begin{array}{c|cc} \frac{1}{2} - \frac{1}{2\sqrt{3}} & \frac{1}{4} & \frac{1}{4} - \frac{1}{2\sqrt{3}}\\ \frac{1}{2} + \frac{1}{2\sqrt{3}} & \frac{1}{4} + \frac{1}{2\sqrt{3}} & \frac{1}{4}\\ \hline & \frac{1}{2} & \frac{1}{2}\\ \end{array}

that is, the implicit RK4 method.

Implicit RK2


Specialising the general method to s = 1, we have

\displaystyle y_{n+1} = y_n + h b_1 k_1


\displaystyle k_1 = f\left( t_n + c_1 h,\ y_{n} + h a_{11} k_1 \right)

Substituting in the values from the tableau, we have

\displaystyle y_{n+1} = y_n + h k_1


\displaystyle k_1 = f\left( t_n + \frac{1}{2} h,\ y_{n} + h \frac{1}{2} k_1 \right)

and further inlining and substitution gives

\displaystyle y_{n+1}=y_n+hf(t+\frac{h}{2},\frac{1}{2}(y_n+y_{n+1}))

which can be recognised as the mid-point method.


A common package for solving ODEs is gsl which Haskell interfaces via the hmatrix-gsl package. Some of gsl is coded with the conditional compilation flag DEBUG e.g. in msbdf.c but sadly not in the simpler methods maybe because they were made part of the package some years earlier. We can add our own of course; here’s a link for reference.

Let’s see how the Implicit Runge-Kutta Order 2 method does on the following system taken from the gsl documenation

\displaystyle \frac{{\mathrm d}^2u}{{\mathrm d} t^2} + \mu \frac{{\mathrm d}u}{{\mathrm d} t} (u^2 - 1) + u = 0

which can be re-written as

\displaystyle \begin{aligned} \frac{{\mathrm d}y_0}{{\mathrm d} t} &= y_1 \\ \frac{{\mathrm d}y_1}{{\mathrm d} t} &= -y_0 - \mu y_1 (y_0 y_0 - 1) \end{aligned}

We can reuse the example program from the gsl documentation, but replacing gsl_odeiv2_step_rk8pd with gsl_odeiv2_step_rk2imp.

Here’s the first few steps

rk2imp_apply: t=0.00000e+00, h=1.00000e-06, y:1.00000e+00 0.00000e+00 
-- evaluate jacobian
(  2,  2)[0,0]:                      0
(  2,  2)[0,1]:                      1
(  2,  2)[1,0]:                     -1
(  2,  2)[1,1]:                     -0
YZ:1.00000e+00 -5.00000e-07 
-- error estimates: 0.00000e+00 8.35739e-20 
rk2imp_apply: t=1.00000e-06, h=5.00000e-06, y:1.00000e+00 -1.00000e-06 
-- evaluate jacobian
(  2,  2)[0,0]:                      0
(  2,  2)[0,1]:                      1
(  2,  2)[1,0]:   -0.99997999999999998
(  2,  2)[1,1]: 1.00008890058234101e-11
YZ:1.00000e+00 -3.50000e-06 
-- error estimates: 1.48030e-16 1.04162e-17 
rk2imp_apply: t=6.00000e-06, h=2.50000e-05, y:1.00000e+00 -6.00000e-06 
-- evaluate jacobian
(  2,  2)[0,0]:                      0
(  2,  2)[0,1]:                      1
(  2,  2)[1,0]:  -0.999880000000002878
(  2,  2)[1,1]: 3.59998697518904009e-10
YZ:1.00000e+00 -1.85000e-05 
-- error estimates: 1.48030e-16 1.30208e-15 
rk2imp_apply: t=3.10000e-05, h=1.25000e-04, y:1.00000e+00 -3.10000e-05 
-- evaluate jacobian
(  2,  2)[0,0]:                      0
(  2,  2)[0,1]:                      1
(  2,  2)[1,0]:  -0.999380000000403723
(  2,  2)[1,1]: 9.6099972424212865e-09
YZ:1.00000e+00 -9.35000e-05 
-- error estimates: 0.00000e+00 1.62760e-13 
rk2imp_apply: t=1.56000e-04, h=6.25000e-04, y:1.00000e+00 -1.56000e-04 
-- evaluate jacobian
(  2,  2)[0,0]:                      0
(  2,  2)[0,1]:                      1
(  2,  2)[1,0]:  -0.996880000051409643
(  2,  2)[1,1]: 2.4335999548874554e-07
YZ:1.00000e+00 -4.68500e-04 
-- error estimates: 1.55431e-14 2.03450e-11 

Let’s see if we can reproduce this in a fast and loose way in Haskell

To make our code easier to write and read let’s lift some arithmetic operators to act on lists (we should really use the Naperian package).

> module RK2Imp where
> instance Num a => Num [a] where
>   (+) = zipWith (+)
>   (*) = zipWith (*)

The actual algorithm is almost trivial

> rk2Step :: Fractional a => a -> a -> [a] -> [a] -> [a]
> rk2Step h t y_n0 y_n1 = y_n0 + (repeat h) * f (t + 0.5 * h) ((repeat 0.5) * (y_n0 + y_n1))

The example Van der Pol oscillator with the same parameter and initial conditions as in the gsl example.

> f :: Fractional b => a -> [b] -> [b]
> f t [y0, y1] = [y1, -y0 - mu * y1 * (y0 * y0 - 1)]
>   where
>     mu = 10.0
> y_init :: [Double]
> y_init = [1.0, 0.0]

We can use co-recursion to find the fixed point of the Runge-Kutta equation arbitrarily choosing the 10th iteration!

> y_1s, y_2s, y_3s :: [[Double]]
> y_1s = y_init : map (rk2Step 1.0e-6 0.0 y_init) y_1s
> nC :: Int
> nC = 10
> y_1, y_2, y_3 :: [Double]
> y_1 = y_1s !! nC

This gives

ghci> y_1

which is not too far from the value given by gsl.

y:1.00000e+00 -1.00000e-06 

Getting the next time and step size from the gsl DEBUG information we can continue

> y_2s = y_1 : map (rk2Step 5.0e-6 1.0e-6 y_1) y_2s
> y_2 = y_2s !! nC
ghci> y_2

gsl gives

y:1.00000e+00 -6.00000e-06
> y_3s = y_2 : map (rk2Step 2.5e-5 6.0e-6 y_2) y_3s
> y_3 = y_3s !! nC

One more time

ghci> y_3

And gsl gives

y:1.00000e+00 -3.10000e-05


The nix-shell which to run this is here and the C code can be built using something like

gcc ode.c -lgsl -lgslcblas -o ode

(in the nix shell).

The Haskell code can be used inside a ghci session.


by Dominic Steinitz at March 04, 2018 01:59 PM

March 03, 2018

Mateusz Kowalczyk

Trying out GHC compact regions for improved latency (Pusher case study).

Posted on March 3, 2018 by Fūzetsu

(I have rambled yesterday about .Internal modules in libraries and why you should use them but I messed up the RSS feed: if you haven’t seen the post but are a library author, you might want to check it out here.)

A couple of years ago, Pusher published a quite good blog post about how GHC garbage collection time pauses scale with size of the working set. I will not re-iterate the blog post, you should go read it.

Ultimately in the post the authors went to StackOverflow for help/confirmation of their findings.

What was of particular interest to me is that the accepted answer mentions an upcoming feature called compact regions which could help. As it happens, today’s the future and the feature has been in GHC since 8.2.x. Very roughly, it allows you to put data in a contiguous memory region with some restrictions. One of these restrictions is that this data cannot have outgoing pointers. This is very useful because if it has no outgoing pointers then we know it’s not holding onto any GC roots and doesn’t have to be traversed or copied by the GC. I very highly recommend that you read the paper: I was certainly very impressed. I also didn’t know that the original use-case was to send unserialised data over the network, with GC latency being a bit more secondary. There are more cool things you can do, like write the region out to disk and read it back later, which can potentially save you a lot of time that you might have had to spend on (de)serialisation: very inefficient if you’re running the binary multiple times.

Let’s try using compact regions for Pusher’s use-case and check if it can help. This is an exploratory exercise for me, it could fail horribly and I will be discovering things as we go along.

Replicating reported results

Pusher’s reduced code was as follows.

On my machine this gives me the following numbers:

Well, these aren’t quite the same. I’m using GHC 8.2.2 while the author used 7.10.2. I also guess that my machine might be faster. That’s okay, I will just increase the maximum map size to 500000 elements and double the number of messages.

Okay, I get a lot more total memory used but the pause is now very similar to original problem. Let’s work with this.

One important thing I want to mention is that the above code is obviously somewhat contrived and not optimal. Better (faster, lower latency) answers were given on the SO question, but that’s not the point of this exercise.

First attempt at compacting data

Let’s use the compact package to put the working set (message map) in a compact region and see what happens. On first attempt we get an error:

Well, that’s a problem. Haddock helpfully informs us why

Pinned ByteArray# objects cannot be compacted. This is for a good reason: the
memory is pinned so that it can be referenced by address (the address
might be stored in a C data structure, for example), so we can't make
a copy of it to store in the Compact.

Okay, let’s use ShortByteString instead which doesn’t use pinned memory.

Due to the additional overheads our pauses are through the roof. Let’s go back to the original numbers of a 200k map and a million messages:

Linking Main_short ...
   4,052,460,128 bytes allocated in the heap
   4,110,484,224 bytes copied during GC
     224,033,704 bytes maximum residency (18 sample(s))
      49,079,384 bytes maximum slop
             700 MB total memory in use (0 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0      4477 colls,     0 par    0.734s   0.735s     0.0002s    0.0717s
  Gen  1        18 colls,     0 par    0.002s   0.002s     0.0001s    0.0002s

  INIT    time    0.000s  (  0.000s elapsed)
  MUT     time    0.416s  (  0.416s elapsed)
  GC      time    0.736s  (  0.736s elapsed)
  EXIT    time    0.014s  (  0.014s elapsed)
  Total   time    1.166s  (  1.166s elapsed)

  %GC     time      63.1%  (63.1% elapsed)

  Alloc rate    9,746,445,658 bytes per MUT second

  Productivity  36.9% of total user, 36.9% of total elapsed

OK, let’s work with this.

Second attempt at compacting data

Now that we’re back to original figures (ish) we can actually compact data. Let’s try a naive approach: stick the whole map into a compact region:

Let’s run:

We completely eliminated the latency due to GC! We don’t really hold on to any real data anymore, so all the collections are extremely quick. What’s the catch?

Well, few things.

  1. Our program is twice as slow. We sacrificed throughput for latency. This was OK for the Pusher guys, but 2x is quite poor. If we could go faster that would be nice.
  2. Our total memory used is through the roof: 4x increase with 10x increase in resident memory. We’re only ever appending data to the region, never getting rid of data within it. This means our region is holding onto all 1000000 messages.

Can we try to do something more reasonable? What if we didn’t have a million messages but unbounded amount?

Memory usage

How do we lower the memory usage? We know where it’s coming from: the data in the region is never traced, never copied and therefore nothing is ever freed. The GC does not look in the region at all. To free the data in a region, we have to copy the live data we’re interested in out to a different region and free the region itself. This is pretty much exactly what GHC’s GC does, so it’s like invoking the garbage collector manually for a region.

In order to copy data out to a new region, we simply use compact on the value and return the new region, letting the old region be freed. However, we can’t (or rather, shouldn’t) do this on every new message: that’s like running garbage collection on every message that comes in. Notably, this is bad and I’m not even patient enough to wait for it to terminate:

You may think we can improve this: we only need to collect when we throw away old data, right?

Well, this is slightly better but only by some constant factor: after the buffer fills, we’re back to copying all the time.

What about a slightly more clever solution? We can specify how much memory we want to use and when we want to copy the data into a new region. That is, simply copy every n messages. In the worst case, we have 200k messages in the buffer plus n removed messages. This gives us more fine-grained control over exactly how much space we’re willing to allocate (at most) and how often copying happens. Most importantly, it allows us to reliably copy a known amount of data (up to a bound) which we can directly correlate to GC pause times! I’m going to use the fact that message IDs are sequential, but you could trivially maintain your own counter instead. Let’s make the code copy to a new region every n messages, with n configurable at the command line.
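A sketch of the copying variant (n comes from the command line; since message IDs are sequential, msgId `mod` n works as the counter):

pushMsg :: Int -> Compact Chan -> Msg -> IO (Compact Chan)
pushMsg n chan (Msg msgId msgContent) =
  let inserted = Map.insert msgId msgContent (getCompact chan)
      pruned   = if 200000 < Map.size inserted
                 then Map.deleteMin inserted
                 else inserted
  in if msgId `mod` n == 0
     then compact pruned        -- copy the live data into a fresh region
     else compactAdd chan pruned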

This means that for any n we give the program, it’s going to copy the data out into a new region. Let’s run it for every n from 5000 until 1005000: I’m not running it for less than 5000 because I don’t want to wait through all the copying and I’m running it over 1000000 to show a case where no copy happens.

[Figure: varyingNlargerange]

Here is a second run for smaller range of values to cut off the extremes (100k - 700k):

[Figure: varyingNsmallrange]

Here is a third run between 1k and 100k as this seems to be quite an interesting region, intervals of 1000:

[Figure: varyingNtinyrange]

And finally a smallest run between 40k and 80k as this seems to be somewhat of an optimal range for our example.

[Figure: varyingNsmallestrange]


Conclusions

From the charts, we see that there are areas for this particular use-case where we can keep GC pauses very low. The original requirement was 10ms. The original SO question demonstrated 50ms GC pauses. In the charts we can see many areas below 5ms, and ranges where it’s 1-2ms, while maintaining relatively low residency and about the same factor of slowdown as when we never collected. On the extreme side where we stored all messages in the region, our pauses go to ~0: we don’t have any real data to traverse so GC doesn’t pause for long.

The throughput is poor. 2x-3x times slower. Part of the slowdown is artificial: I had to make things work with ShortByteStrings, I’m creating those from ByteString &c.

The main issue with this case study is that the buffer of messages changes and we have to manage it, balancing memory vs latency vs throughput. Compact regions really shine when you have a chunk of long-lived data that you want to access throughout your program. I would say that using them with data that’s changing frequently is a poor use. Even then, I’m actually pleasantly surprised: if you can take the 2-3x throughput hit but really care about latency, there is now a light at the end of the tunnel for you. There is also an additional memory cost, but in a lot of cases if you’re willing to trade throughput, you probably can afford additional residency too.

Of course, the very first thing to do would be to try to make do without this: compact regions are not magic and they have costs and restrictions. They seem to be very good at what they do (based on the numbers from the paper) but they are not a silver bullet. If you can get by with some smarter programming, you should do that first.

Lastly, my benchmarks are very poor: I sample each N just once. Some values were effectively run multiple times, as the charts were generated from separate runs, but I would not treat anything you see here as scientific fact, nor as a template for how to use compact regions. This was merely me checking out the feature and trying to apply it to an example I knew. If I were to redo this post, I would take the time to set up much more comprehensive benchmarking, with multiple versions of the code side by side, pretty metrics &c.

In the future if I have an application with long-lived mostly-static data, I will certainly keep compact regions in mind.


I used the following code to generate the charts seen earlier. This is throw-away code; it’s included only for completeness.

module Main (main) where

import Control.Monad
import Graphics.Rendering.Chart
import Graphics.Rendering.Chart.Backend.Cairo (renderableToFile)
import Graphics.Rendering.Chart.Easy
import Graphics.Rendering.Chart.Grid
import System.Process

data Stats = Stats
  { nVal :: !Int
    -- | MBs
  , residency :: !Double
  , maxPause :: !Double
  , totalTime :: !Double
  } deriving Show

main :: IO ()
main = do
  -- GHC RTS options do not allow you to see max pause in a nice
  -- format. GHC.Stats doesn't return it. It's not in -t
  -- --machine-readable output either. Just "parse" -s output, this is
  -- a throw-away program anyway.
  times <- forM [5000, 10000 .. 1005000] $ \i -> do
    (_, _, out) <- readProcessWithExitCode "/tmp/Main_short" [show i, "+RTS", "-s"] ""
    let ls = lines out
        res = read . filter (/= ',') . head . words $ ls !! 2
        pause = read . filter (/= 's') . last . words $ ls !! 7
        totTime = read . filter (/= 's') . (!! 4) . words $ ls !! 14
    print i
    pure $! Stats
      { residency = fromIntegral (res :: Int) / 1024 / 1024
      , maxPause = pause
      , totalTime = totTime
      , nVal = i
      }
  void $ renderableToFile def "/tmp/results.png" $ fillBackground def $ gridToRenderable $
    let title = setPickFn nullPickFn $ label def HTA_Centre VTA_Centre "Effect of copies for different N"
    in title `wideAbove` aboveN
         [ aboveN
           [ layoutToGrid $ mkChart "residency" "MiB" red [ (nVal s, residency s) | s <- times ]
           , layoutToGrid $ mkChart "max pause" "sec" green [ (nVal s, maxPause s) | s <- times ]
           , layoutToGrid $ mkChart "total time" "sec" blue [ (nVal s, totalTime s) | s <- times ]
           ]
         ]

mkChart
  :: String
  -> String
  -> Colour Double
  -> [(Int, Double)]
  -> Layout Int Double
mkChart lbl units c vs = execEC $ do
  layout_y_axis . laxis_title .= units
  layout_x_axis . laxis_title .= "Messages before compact copy."
  layout_plots .=
    [ toPlot $ plot_lines_title .~ lbl
             $ plot_lines_values .~ [vs]
             $ plot_lines_style . line_color .~ opaque c
             $ def
    ]

March 03, 2018 12:00 AM

March 02, 2018

Mateusz Kowalczyk

A case for .Internal modules.

Posted on March 2, 2018 by Fūzetsu

Many libraries expose .Internal modules: these are modules that contain functions that are unsafe, unstable, come with no guarantees, and are quite often fast. This is great if you Know What You Are Doing™.

Sadly, not all package authors do this. Part of the motivation is that, in principle, it shouldn’t be needed: indeed, this is what I was told by a co-worker on this seemingly unrelated rules_haskell issue. After a brief chat on Slack, however, I was able to convince him otherwise. Below is a paraphrasing of my arguments: some ramblings on why I think .Internal modules are a good thing and why you should be exposing them.

Expose the internals, please.

  1. I love your library but it can’t cover all my use-cases. It’s wrong for it to try. Please let me use the back doors.

    This is extremely common. I’m using your library and it’s going great. I write my program and it runs fast. I look through perf report and GHC’s profiling output. It could be faster. I know how to make it faster. But I can’t make it faster because I’m relying on your library which insists on taking the slow route.

This is rarely the fault of the library. Libraries provide abstractions, data types, functions on those types, and guarantees. To preserve its invariants, a library needs to protect itself from careless users.

I will give the same example I gave when I initially discussed this. I was recently writing a small program that used a min priority queue. I was also tracking the maximal element that had been inserted: something the library could not offer me in constant time, nor should it. When I wanted to insert this maximal element into the queue, however, I had a problem: the library will perform a number of comparisons linear in the size of the queue; it has to know where to insert the element, after all. So I created an issue on the pqueue library to let me do this. The easiest way is to expose the data type constructors in an .Internal module and let me do the traversal myself without doing the comparisons.

Notably, I want to stress that this is no fault of the library: inserting an arbitrary element in an arbitrary place breaks the queue. It is only with my external knowledge that I am able to insert it safely.

Could better library design have prevented this? I want to argue that it could not, given that we only have a finite amount of time to spend. If we were using something like Agda, or even just more esoteric parts of Haskell, we could improve the library to expose such a function: an insertMaximal that requires a proof that the element is maximal, a proof which I could produce. But the library is not about providing this; it would likely make the common-case API more awkward, and it is generally not the best use of the author’s time. Just give me the constructors, please.

  2. I love your library and you have a function that does the exact thing I need already but… it’s not exported.

    Very frequently library authors will implement certain things for their own convenience and only export a nice API to the user. This is OK if the API works for you but if something is missing, it’s incredibly frustrating to find that the exact code you need already exists. Frequently it’s not even unsafe code. I could give another pqueue example but let’s look at a different one for variety.

text-show is a package that provides a Show-like type class that, instead of going to String or a String builder (String -> String), uses Text. This is great because String is the devil and you shouldn’t be using it. Recently I found myself wanting to save some memory, so I reached for ShortByteString, which has lower overhead than ByteString. The bytes I was storing were really just the ASCII character set, and their raison d’être was to be printed to the terminal later if the user so requested. Can text-show help us? Sure, it has a module with a TextShow instance for ShortByteString. Oh, but hold on. It’s not quite what I want!

It prints quotes around my content, and I didn’t want that. If we look at the source, we can find that it defines unpackChars :: ShortByteString -> [Char], which does exactly what we need, but then the instance looks like this:
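    -- (paraphrased from memory rather than the verbatim text-show source)
    instance TextShow ShortByteString where
      showb = showb . unpackChars -- the String instance adds the quotes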

Of course, to my dismay, it does not export unpackChars, only the instance, which is useless to me. My choice is to either copy and paste unpackChars into my own code, or deal with it in a different way, such as going through ByteString and using a text builder function ByteString -> Builder which doesn’t wrap the content in quotes and already exists.

To add insult to injury, unpackChars actually already exists in the bytestring package. Worse, bytestring even exposes .Internal modules already! Sadly, the author decided not to expose that function. This means text-show had to copy and paste the implementation, and subsequently so would I, until someone has the foresight to export it. At least it is possible to copy and paste this code, because we have enough access to the internals to re-implement it.

Another offender of “does exactly what you want but it’s inside a typeclass behind something you don’t want” that I encountered recently is store. Store instances for Vector are generated through TH, and they store the vector length followed by the data. But I already knew the length from an external source and just wanted the data. Solutions? Well, two:

    1. newtype Vector, passing the length through a type parameter with GHC.TypeLits or otherwise, then retrieve the value in the implementation. Yuck, and also slow (goes through Integer &c.).

    2. Don’t use Store at all. Thankfully, store-core exists (thank you!), which lets you work with Peek directly. You still have to read the -ddump-splices output to see the generated TH, then copy and paste the part you’re interested in. After adapting it a bit, I now have a bunch of these things in my code for multiple vector types.

      It would be much nicer if this was generated, exported, and only then used in the typeclass. I understand that this is much more effort for the store authors, however, so this one is more wishful thinking than anything.

    To conclude this point, please think of the users and expose non-trivial internal functions that may be useful to anyone. Stick them in .Internal with no guarantees, mark them INLINE if you’re relying on that, just let us use them; see the sketch after this list.

  3. I’m still going to get access to internals if I want to. Please don’t make me do the extra work.

    If I need the internals and they are not exposed, here are my options.

    • Clone the package, hack the source to expose what I need, use it.
    • Ask the author to do it. Wait and hope they do it, and quickly. In the meantime, I’m probably forced to fork anyway.
    • Don’t do anything and simply use the existing API. Cry over the performance you’re losing while the code you need to go omgfast is right there.
    • If the function is small and self-contained, copy and paste all the code needed to make it work. This is often not an option unless you want to copy the whole package, because the likelihood is that the internals you need are also not exposed. This is basically a less drastic fork if you’re lucky, and a full fork if you’re not. The downside is that it’s now your code to live with.
    • You could also complain to the author that the library API is not good enough and that it should do the thing you want it to do. Sometimes you may even be right! More often than not, though, if you’re looking for internals, what you need probably doesn’t belong in the API to start with, and redesigning the library to include your use-case is unreasonable/impossible/not cost effective.

    It’s better for everyone if I can access the internals. I’m happy because I get to do what I want. The author is happy because I’m not bothering him on the issue tracker and I’m talking about how fast and awesome and accessible the library is.

  4. I want to use your library right now but it’s doing something poorly.

    This usually warrants a fork/pull request anyway, but sometimes a library really is doing mostly what I want while some things are poorly designed or implemented. I could work around the problem on the spot and send a pull request later, but if you’re not exposing internals then I’m out of luck: either I am forced to fork, or I use another library. If I’m short on time and an alternative exists, the latter sounds like the better option, even if the original library is much nicer in general.
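As promised above, here is a sketch of what exposing an internal function can look like (module name and the INLINE choice are illustrative, not taken from any real package; the body is the obvious byte-to-Char decoding):

    module Data.Foo.Internal
      ( -- No stability guarantees; misuse can break Data.Foo's invariants.
        unpackChars
      ) where

    import Data.ByteString.Short (ShortByteString)
    import qualified Data.ByteString.Short as SBS
    import Data.Char (chr)

    -- Decode each byte directly to a Char: no quoting, no escaping.
    {-# INLINE unpackChars #-}
    unpackChars :: ShortByteString -> [Char]
    unpackChars = map (chr . fromIntegral) . SBS.unpack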

About Haddock for .Internal modules.

A side-point I want to mention is something that a few packages like to do and that I really hate. Let’s use bytestring as an example, and look at the main page for the package. The .Internal modules appear but have no documentation! Indeed, often you can’t even tell that a package exposes internals. Sometimes these modules are even documented, but a Haddock pragma is used to hide them.

Please stop doing this. It’s just a massive inconvenience. I have to first look through the cabal file to check which modules are really exposed. Then I have to dive into the source to see what functions are available, and check which of them are actually exported. You, as an author, also lose the opportunity to plaster in massive letters at the top of the module that this is an internal module with no guarantees.

If you don’t want to document your .Internal module, that’s up to you, but at the very least don’t make me look through the source every time I want to check something. “You’re forced to look at the source, so you have to see what the functions are actually doing” is not a good argument: I probably already did, and you’re not giving me any guarantees for the module anyway, so there’s no need to make it a pain to use.


Next time you release a library, please consider exposing the internals one way or another. You can keep your usual hidden module hierarchy if you want; just make a kitchen-sink module that re-exports those. Exposing typeclasses that nearly do what one might want, but not quite, while hiding the implementations, doesn’t count either. If your implementation looks like the following sketch (class and type names here are illustrative),
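    class SomeClass a where
      someFunc :: a -> Result

    instance SomeClass SomeType where
      someFunc = someFuncImpl  -- the useful function, hidden from users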

expose someFuncImpl as well.

A last remark: I am not against hidden modules; I am against using them to hide useful code. You can still use hidden modules to organise your project in a way you find convenient, but I would very much appreciate it if you could expose all that useful code through some means in the end.

March 02, 2018 10:20 PM

Douglas M. Auclair (geophf)

February 2018 1HaskellADay problems and solutions

by geophf ( at March 02, 2018 05:46 AM

Manuel M T Chakravarty

Demystifying functional programming and what that means for learning & teaching

In March, I will talk about Demystifying functional programming and what that means for learning & teaching in four different cities: March 6 – Sydney / March 8 – Melbourne / March 13 – Perth / March 22 – Brisbane. This talk will outline the thinking behind my Welcome to FP workshop at YOW! Lambda Jam 2018, which will be a general introduction to functional programming.

March 02, 2018 01:47 AM

Mateusz Kowalczyk

GHC can output profiling information as JSON and you should use it.

Posted on March 2, 2018 by Fūzetsu

This is just a small PSA. As of GHC 8.2, GHC accepts the -pj RTS flag, which outputs profiling information in a nice, machine-readable (JSON) format. So, use it. I’d even be inclined to say that you should always use it; if you want the old format, we can have a tool that produces it.
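For example, with a profiling-enabled build (the module name here is a placeholder):

    $ ghc -prof -fprof-auto -rtsopts Main.hs
    $ ./Main +RTS -pj -RTS   # writes Main.prof as a single JSON document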

Yesterday I released two packages related to this. The first is a very simple and naive package that reads the output: ghc-prof-aeson. It’s just about what you’d expect: type definitions, then aeson deriving. I have thrown in Linux, OSX and Windows CI as a bonus.

The second package is ghc-prof-aeson-flamegraph. This one is similar to ghc-prof-flamegraph, except that it doesn’t bundle the FlameGraph tool, and I didn’t have to try to impose additional dependencies on the existing package, which was also not completely straightforward to integrate with. Primarily this package is motivated by this long-standing issue where parsing the profile would just fail.

The -p and -P &c. profiling output formats are not reliable: if you’re doing retainer profiling, for example, you now have all this extra stuff in your profile. Just use -pj and save everyone’s sanity.


Running on a random profile I had lying around. Generated with +RTS -pj.

[shana@lenalee:~/programming/ghc-prof-aeson-flamegraph]$ nix run nixpkgs.stack nixpkgs.perl.FlameGraph
[shana@lenalee:~/programming/ghc-prof-aeson-flamegraph]$ stack exec --no-nix-pure -- bash -c 'cat /tmp/ | ghc-prof-aeson-flamegraph | > /tmp/secret.svg'

Censored to protect the innocent.

<figure> censored_prof<figcaption>censored_prof</figcaption> </figure>


You should start using -pj if you’re using GHC profiling. Better yet, you should (re)write tools to work with this format instead of the awful ad-hoc parsing that existing tools do.

March 02, 2018 12:00 AM

March 01, 2018

Theory Lunch (Institute of Cybernetics, Tallinn)

A crash course in subadditivity, part 1

Today, the 1st of March 2018, I gave what ended up being the first of a series of Theory Lunch talks about subadditive functions. The idea is to give an introduction to the subject, following Hille’s and Lind and Marcus’s textbooks, and to state an important theorem by the Hungarian mathematician Mihály Fekete; then to discuss some extensions to the case of many variables and their implications in the theory of cellular automata, referring to two of my papers from 2008, one of them with Tommaso Toffoli and Patrizia Mentrasti.

Let’s start from the beginning:

Definition 1. Let (S, \cdot) be a semigroup. A function f : S \to \mathbb{R} is subadditive if:

f(x \cdot y) \leq f(x) + f(y)      for every x, y \in S                (1)


If S is a group, with an identity element e and a multiplicative inverse x^{-1} for every element x, then (1) is equivalent to:

f(x) \geq f(y) - f(x^{-1} \cdot y)        for every x, y \in S

Usually, we will have S be one of the sets \mathbb{Z} and \mathbb{R} of integers and reals, respectively, or one of the sets \mathbb{Z}_{+} and \mathbb{R}_{+} of positive integers and positive reals, respectively. All these sets will be considered as semigroups with respect to addition.

Examples of subadditive functions are:

  1. The Heaviside function H : \mathbb{R} \to \mathbb{R} defined by H(x)=1 if x \geq 0 and H(x)=0 if x<0. This function is subadditive: if x and y are both negative, then the left-hand side of (1) is 0; and if at least one of them is nonnegative, then the right-hand side is at least 1, while the left-hand side is at most 1.
  2. Let U \subseteq S and let f : S \to \mathbb{R} be defined by f(x)=1 if x \in U and f(x)=2 if x \not \in U. Then f is subadditive, because the left-hand side of (1) is either 1 or 2, and the right-hand side is either 2, 3, or 4. For S=\mathbb{R} and U=\mathbb{Q} this shows that a subadditive function can be discontinuous at every point.
  3. Let A be a finite nonempty set and let A^\ast be the free monoid on A, that is, the set of words on A with concatenation as the binary operation and the empty word \lambda as the identity element. The length of a word is a subadditive (actually, additive) function on A^\ast.

If S is a subsemigroup of either \mathbb{R} or \mathbb{Z}, a useful trick is that f(x) is subadditive on -S if and only if f(-x) is subadditive on S. This, for example, allows us to “dualize” proofs on \mathbb{R}_{+} to make them work on \mathbb{R}_{-}, the additive semigroup of negative reals.

Fekete’s lemma. Let f : S \to \mathbb{R} be a subadditive function.

  1. If S=\mathbb{R}_{+} or S=\mathbb{Z}_{+}, then \ell_{+} = \lim_{x \to +\infty} f(x)/x = \inf_{x > 0} f(x)/x.
  2. Dually, if S=\mathbb{R}_{-} or S=\mathbb{Z}_{-}, then \ell_{-} = \lim_{x \to -\infty} f(x)/x = \sup_{x < 0} f(x)/x.
  3. Finally, if S=\mathbb{R} or S=\mathbb{Z}, then \ell_{-} \leq \ell_{+}, and both are finite.

Note that \ell_{+} cannot be +\infty, but can be -\infty: for example, f(x) = -x^2 is subadditive on \mathbb{R}_{+} and \lim_{x \to +\infty} f(x)/x = -\infty. Dually, \ell_{-} cannot be -\infty, but can be +\infty. Note also that f(x)/x itself need not be subadditive: for example, f(x)=-x is subadditive on \mathbb{R}_{+}, but f(x)/x=-1 is not.

Proof of point 1 with S = \mathbb{Z}_{+}: Fix a positive integer t. Every large enough positive integer x can be written in the form x = qt + r with q a positive integer and (attention!) 1 \leq r \leq t. By subadditivity,

\dfrac{f(x)}{x} \leq \dfrac{f(qt) + f(r)}{x} \leq \dfrac{q}{x} f(t) + \dfrac{f(r)}{x}.

But by construction, \lim_{x \to \infty} q/x = 1/t: since r takes at most t possible values, f(r) is bounded, so by taking upper limits we get

\limsup_{x \to +\infty} \dfrac{f(x)}{x} \leq \dfrac{f(t)}{t}.

This holds for every positive integer t, so we can conclude:

\limsup_{x \to \infty} \dfrac{f(x)}{x} \leq \inf_{x > 0} \dfrac{f(x)}{x} \leq \liminf_{x \to \infty} \dfrac{f(x)}{x}.


A key ingredient of the proof is that f is bounded on \{1, \ldots, t\}. The argument can be modified to work on \mathbb{R}_{+}, but requires that f be bounded on every bounded interval of the form [1,t+1): this is actually true if f is subadditive, but proving this fact would go beyond the scope of our talks.

An immediate consequence of Fekete’s lemma is that, as it was intuitively true from the definition, a subadditive function defined on \mathbb{R}_{+} or \mathbb{Z}_{+} can go to +\infty for x \to +\infty at most linearly. On the other hand, an everywhere negative subadditive function defined on positive reals or positive integers can go to -\infty for x \to +\infty arbitrarily fast. Indeed, the following holds:

Lemma 1. Let S be either \mathbb{R}_{+} or \mathbb{Z}_{+} and let f, g : S \to \mathbb{R}. If f is negative and subadditive and g is positive and nondecreasing, then f(x)g(x) is subadditive (and negative).

Proof: If a<0 and 0 < b \leq c, then ac \leq ab. The first inequality below uses subadditivity of f together with g(x+y) > 0; the second applies this observation with a = f(x), b = g(x), c = g(x+y) (and likewise for y), using that g is nondecreasing:

f(x+y) g(x+y) \leq \left( f(x) + f(y) \right) g(x+y) \leq f(x) g(x) + f(y) g(y).


Hence, if f(x) = -x and g(x) is positive and nondecreasing, then f(x)g(x) is subadditive and |f(x)g(x)| = \Omega(g(x)).

To see an application of Fekete’s lemma in the context of the theory of dynamical systems, we introduce the notions of subshift and of cellular automaton. We will first do so in dimension 1, then expand to arbitrary dimension in later talks.

Definition 2. Let A be a finite nonempty set. A subset X of the set A^\mathbb{Z} of bi-infinite words is a (one-dimensional) subshift if there exists a set of forbidden words \mathcal{F} \subseteq A^\ast such that the following holds: for every x \in A^\mathbb{Z}, it is x \in X if and only if for no i \in \mathbb{Z} and n \in \mathbb{Z}_{+} it is x_i \ldots x_{i+n-1} \in \mathcal{F}. The set \mathcal{L}(X) of the words over A which appear in elements of X is called the language of the subshift X. \Diamond

Examples of subshifts are:

  1. The full shift X = A^\mathbb{Z}. In this case, \mathcal{F} = \emptyset.
  2. The golden mean shift on the binary alphabet, where \mathcal{F} = \{ 11 \}.

Let X be a subshift on A and let u and v be words on A of length m and n, respectively. If uv is an allowed word for X, then so must be u and v: that is, there cannot be more allowed words of length m+n than juxtapositions of an allowed word of length m and an allowed word of length n. Switching to logarithms, we see that

f(n) = \log |\mathcal{L}(X) \cap A^n|

is a subadditive function of the positive integer variable n: Fekete’s lemma then tells us that

h(X) = \lim_{n \to \infty} \dfrac{\log |\mathcal{L}(X) \cap A^n|}{n}                (2)

exists. The quantity (2) is called the entropy of the subshift X, and is a measure of the quantity of information it can convey.
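For example, for the golden mean shift the allowed words of length n satisfy a Fibonacci recurrence: such a word is either 0 followed by an allowed word of length n-1, or 10 followed by an allowed word of length n-2, so that |\mathcal{L}(X) \cap A^n| = F_{n+2}. Since F_n grows like \varphi^n, the entropy is h(X) = \log \varphi, where \varphi = (1+\sqrt{5})/2 is the golden mean, whence the name of the shift.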

(As a funny note, the use of the lowercase letter h to indicate entropy apparently originates with Claude Shannon, after John von Neumann suggested that he call a similar information-theoretical quantity “entropy”. Shannon decided to uncapitalize the letter H used by Ludwig Boltzmann… which, however, was actually a capitalized \eta.)

Definition 3. A one-dimensional cellular automaton is a triple \mathcal{A} = \langle Q, r, \delta \rangle where Q is a finite nonempty set of states, r is a nonnegative integer radius, and \delta : Q^{2r+1} \to Q is a local update rule. \Diamond

A cellular automaton \mathcal{A} = \langle Q, r, \delta \rangle induces a global transition function G : Q^\mathbb{Z} \to Q^\mathbb{Z} by synchronous update:

G(x)_i = \delta(x_{i-r} \ldots x_{i+r})        for every i \in \mathbb{Z}                (3)

If G is the global transition function of a cellular automaton with set of states Q, then G \left( Q^\mathbb{Z} \right) is a subshift. Not every subshift can be obtained this way: those that can, belong to the special class of sofic shifts, a term suggested by Benjamin Weiss coming from the Hebrew word for “finite”.

In the upcoming talk (or talks) we will examine the case of several variables, and correspondingly, subshifts and cellular automata in higher dimension. In particular, we will discuss a generalization of Fekete’s lemma to arbitrarily many positive integer variables.


  1. Silvio Capobianco. Multidimensional cellular automata and generalization of Fekete’s lemma. Discrete Mathematics and Theoretical Computer Science 10 (2008), 95–104.
  2. Einar Hille. Functional Analysis and Semigroups. American Mathematical Society, 1948.
  3. Douglas Lind and Brian Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press 1995.
  4. Tommaso Toffoli, Silvio Capobianco, and Patrizia Mentrasti. When—and how—can a cellular automaton be rewritten as a lattice gas? Theoretical Computer Science 403 (2008), 71–88.

by Silvio Capobianco at March 01, 2018 06:16 PM

Brent Yorgey

Test your intuition: logarithmic time

Here is a question to test your intuition for logarithms. NO CALCULATORS allowed, and don’t take a long time to work out an answer in your head! Just go with your gut. After you have committed to a choice (say, by writing it down), then go get a calculator and a pencil and see if you can work out whether you were right!

On a certain input, an O(n) algorithm runs in one-tenth of a second. On the same input, an O(n^2) algorithm takes one and a half weeks to run. Approximately how long would an O(n \log n) algorithm take on the same input?

  1. a few seconds?
  2. a few minutes?
  3. a few hours?
  4. a few days?

I’m pretty sure it wasn’t until quite a few years out of undergrad that I would have gotten this right.

by Brent at March 01, 2018 03:55 AM

February 28, 2018

FP Complete

10 Common Mistakes to Avoid in FinTech Software Development

Financial Technology, or FinTech, is a relatively new aspect of the financial industry, which focuses on applying technology to improve financial activities. This has the potential to open the doors to new kinds of applications and services for customers, as well as more competitive financial technology.

by Michael Snoyman ( at February 28, 2018 08:13 PM

February 27, 2018

Neil Mitchell

Switching to HTTPS

Summary: All my domains are now on HTTPS. In this post I describe how I did it.

It's quite clear everyone should be moving their domains to HTTPS, or face the consequences. I have recently converted three domains to HTTPS - two static sites and one Haskell server. Converting was mostly a case of finding the right how-to guide and following it, so in this post I'll link to the "right" guides.

Static Websites

I have two static domains, both of which follow a similar pattern, so I'll focus on the Shake website. The steps are:

  • Get a domain name: I bought a domain name from Soho UK, who later sold up and became Cloud Mega. I've been using them for websites for a very long time, and never had any complaints.
  • Write some content: My static websites are built from source material that is converted via custom scripts into the final website. For Shake, the source is Markdown files and the converter is a Haskell program; I use the markdown package, with custom tricks like hyperlinking all identifiers (see these code samples). After running the program on the Markdown files I have HTML/CSS that can be served directly.
  • Serve the content: I host and serve the content using GitHub Pages, which lets you serve content either from the gh-pages branch or from a separate GitHub repo - I use the latter. I then use the custom domain name feature so that requests to my domain are served by GitHub Pages over HTTP.
  • Serve with HTTPS: The previous steps get us an HTTP website; last weekend I did the work to get to HTTPS. I followed these instructions, which use Cloudflare as an intermediary, serving over HTTPS and providing a cache. I have configured things to always redirect away from the www prefix and to always use HTTPS. The only minor hiccup was that the HTTPS certification for Shake took about 3 days to initialise (it should take less than 24 hours; my other domain took 15 minutes) - but the problem went away on its own.
  • Collect email: The final step was to get email to the domains working - in general I'd prefer people email me directly at Gmail, but it's always good for email to work. I used these instructions, which use Mailgun to collect and forward emails. The only difficulty is that sending Gmail emails to yourself via a custom domain leaves the email in the Sent mail with no indication it was delivered - I had to test using a different email account.

With that, we have a static website served over HTTPS. It's quite remarkable that such a pipeline can be built using free services.

Dynamic Website

I maintain the Hoogle server, which provides a search engine for Haskell libraries. This website is dynamic, executing Haskell code to return suitable results for each search.

  • Write the program: I have been writing Hoogle for the last 14 years; when run as hoogle server it spawns a web server which serves requests, using the Warp package to do the actual serving.
  • Configure the server: The server is kindly provided by the Haskell Infrastructure Committee, where I have a VM which runs Hoogle. My setup instructions for that server are in the Hoogle repo. Of note, I forward port 80 to 8080, allowing me to serve HTTP pages with a non-root program.
  • Serve static content over CDN: The static content of Hoogle (images, CSS files) could be served up by the normal server, but it's just one small server in one single location, so I make things go faster by sending most static requests to Raw GitHack, which itself is just a wrapper around Cloudflare.
  • Obtain certificates: To serve over HTTPS you need certificates that prove you own the domain. I got the certificates from Let's Encrypt, using the Certbot client. Since I run a custom server I opted for the Standalone challenge (which spawns a web server on your box), over HTTP, serving on port 8080 to account for the redirection I had put in place (a sketch of the invocation follows this list). Unfortunately, generating the certificates required taking Hoogle down briefly.
  • Serving over HTTPS: Fortunately, a PR was submitted to Hoogle some time ago allowing users to pass a certificate at startup and serve Hoogle over HTTPS. I passed the certificates obtained in the previous step and spawned Hoogle on 8443 (to which 443 redirects), giving me an HTTPS server.
  • Redirecting HTTP traffic: For the static websites redirecting HTTP traffic to HTTPS was as simple as checking a box on Cloudflare. For my own server I needed to run a server on port 8080 that did the redirect. I found the Haskell program rdr2tls which is small, simple, and works very well.
  • Renewing the certificate: Let's Encrypt certificates expire every 90 days, so they need renewing. I know the approximate steps, but for now I intend to renew the certificate manually.
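For reference, the standalone issuance sketched above would look roughly like this (reconstructed from memory of Certbot's options, with a placeholder domain; double-check against certbot --help):

    $ certbot certonly --standalone --preferred-challenges http \
        --http-01-port 8080 -d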

Switching Hoogle to HTTPS was fairly painless.

by Neil Mitchell ( at February 27, 2018 09:25 PM

FP Complete

Selecting the Right Level of Quality Assurance

You’re implementing a new IT system, an app, a DevOps deployment, a DataOps system to merge your data feeds. And it seems to work all right. So, are you done? How much quality assurance work (QA) should you do before calling it finished?

by Aaron Contorer ( at February 27, 2018 06:56 PM

February 26, 2018

Functional Jobs

Full-Stack Developer (Clojure) at Privacy Company (Full-time)

About the job

Full time position in our office in The Hague, Netherlands

Privacy Company helps businesses solve the privacy compliance problem, with a focus on bringing privacy awareness within reach of those who are not experts or lawyers. We’ve developed a web application, Privacy Nexus, that offers a very practical approach to reach this goal and has been designed to make privacy management easier, more efficient and more fun.

The development team currently consists of two developers and one designer, and we are looking for a new developer who can help us expand the product further. Given the small size of the team, there are countless opportunities to make an impact on both the product and our internal culture.

Our ideal colleague shares our passion for writing clean and modular code, enjoys (or at least is interested in learning) Functional Programming, and has been writing code for fun for years. Our technology stack of choice is Clojure for the backend and ClojureScript for the frontend (with React). We find that Clojure’s simplicity, immutability and data-first approach results in clearer code that is easy to reason about and to test. In contrast, we're not fans of OOP, XML and Design Patterns.

What we expect from you

  • Experience with Clojure gets you major bonus points.
  • Experience with other functional programming languages (like Haskell, Common Lisp, Erlang, etc.) is always appreciated.
  • You understand practical demands, but still strive to do things the right way.
  • You care about understanding problems at their root, with all the attention and dedication it requires.
  • You are a "full-stack" developer: you like to work on anything from database queries and backend code to fine tuning CSS and building React components.
  • You are familiar with command line tools: you know your way around git, bash/zsh/etc, grep (and perhaps, why not, the occasional perl one-liner).
  • You are comfortable working with a database without an ORM.
  • Formal Computer Science education is not a hard requirement; relevant education can be a plus, but we are more interested in what you have built than in what you have learned.

What we offer

  • A challenging, fun environment with lots of autonomy and self-direction
  • No hierarchy or project managers (we prefer self-organising)
  • Flexibility about when and where you work (however this is not a fully remote position, you should spend at least 1 day / week in the office with us)
  • You get to choose if you want a Mac or Linux laptop (or Windows, but why?)
  • Salary commensurate with your experience (32-40 hours/week)
  • Long term commitment is intended

Like what you hear? Drop us an email at developers [at] privacycompany [dot] eu. If you want to apply for the position, please send your CV along with a link to your Github / GitLab / Bitbucket profile (or simply some code you worked on).

Note to International Applicants: Unfortunately we cannot sponsor Visas to the Netherlands, please apply only if you have a valid EU work permit.

No agency calls please.

Get information on how to apply for this position.

February 26, 2018 03:31 PM

February 25, 2018

Paul Johnson

Haskell with Reactive Banana and GTK3

I've been doing some GUI coding recently using a combination of Reactive Banana and GTK3. I started out with just GTK3, but I could see it wasn't going to scale because everything GTK3 does is in the IO monad. I found I was having to create IORefs to track the application state, and then pass these around for reading and writing by various event handlers. While the application was small this was manageable, but I could see that it was going to grow into a pile of imperative spaghetti as time went on.

I knew about functional reactive programming (FRP), and went on the hunt for a framework that would work with GTK3. I chose Reactive Banana despite the silly name because it seemed to be targeted at desktop GUI applications rather than games and simulations.

Connecting to GTK3

FRP is based around two key abstractions:

  • Events are instants in time that carry some data. When the user clicks on a button in a GUI you want an event to denote the fact that the button was clicked. If the user moves a slider then you want an event with the new position of the slider.

  • Behaviors carry data that changes over time. In theory this change can be continuous; for instance, if you simulate a bouncing ball then the position of the ball is a behavior: at any point in time you can query it, and queries at different times will get different answers. However, Reactive Banana only supports behaviors that change in response to events and remain constant the rest of the time. That's fine: my application responds to user events and doesn't need continuous changes. Take the slider event I mentioned above: when the user moves the slider, you want to update a sliderPosition behavior with the latest position so that other parts of the program can use the value later on.

In Reactive Banana you can convert an event into a behavior with the stepper function. You can also get an event back when a behavior changes.
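For example (a one-line sketch inside the MomentIO monad; sliderMovedE is a hypothetical Event Double):

   sliderPositionB <- stepper 0 sliderMovedE  -- Behavior Double holding the latest position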

Behaviors are a lot like the IORefs I was already using, but events are what make the whole thing scalable. In a large application you may want several things to happen when the user clicks something, but without the Event abstraction all of those things have to be directly associated. This harms modularity because it creates dependencies between modules that own callbacks and modules that own widgets, and it also causes uncertainty within callbacks about which other callbacks might already have been invoked. With FRP the widget creator can just return an Event without needing to know who receives it, and behaviours are not updated until all events have been processed.

There is already a binding between Reactive Banana and WxHaskell, but nothing for GTK. So my first job was to figure this out. Fortunately it turned out to be very simple. Every widget in GTK3 has three key lists of things in its API:

  • IO functions. These are used to create widgets, and also to get and set various parameters. So for instance the slider widget has functions like this (I'm glossing over some typeclass stuff here. For now just take it that a slider has type Range):

       rangeGetValue :: Range -> IO Double
       rangeSetValue :: Range -> Double -> IO ()

  • Attributes. These are kind of like lenses on the widget, in that they let you both read and write a value. However unlike Haskell lenses this only works in the IO monad. So for instance the slider widget has an attribute:

       rangeValue :: Attr Range Double

    You can access the attributes of a widget using the get and set functions. This is equivalent to using the two IO functions above.

  • Signals. These are hooks where you can attach a callback to a widget using the on function. A callback is an IO monad action which is invoked whenever the signal is triggered. This is usually when the user does something, but it can also be when the program does something. For instance the slider widget has a signal

       valueChanged :: Signal Range (IO ())
    The last argument is the type of the callback. In this case it takes no parameters and returns no value, so you can hook into it like this:

       on mySlider valueChanged $ do
         v <- rangeGetValue mySlider
         -- ... do something with v ...

One subtlety about GTK3 signals is that they are often only triggered when the underlying value actually changes, rather than every time the underlying setter function is called. So if the slider is on 10 and you call "rangeSetValue 9" then the callback is triggered in exactly the same way as when the user moves it. However if you call "rangeSetValue 10" then the callback is not triggered. This lets you cross-connect widgets without creating endless loops.

Connecting GUI Inputs

The crucial thing is that GTK signals and attributes are isomorphic with Reactive Banana events and behaviors. So the following code gets you quite a long way:

registerIOSignal :: (MonadIO m)
  => object
  -> Signal object (m a)
  -> m (a, b)
  -> MomentIO (Event b)
registerIOSignal obj sig act = do
  (event, runHandlers) <- newEvent
  _ <- liftIO $ obj `on` sig $ do
    (r, v) <- act
    liftIO $ runHandlers v
    return r
  return event

There are a few wrinkles that this has to cope with:

First, a few signal handlers expect the callback to return something other than "()". Hence the "a" type parameter above.

Second, the callback doesn't usually get any arguments, such as the current slider position. It's up to the callback itself to get whatever information it needs, hence you still need to write some callback code.

Third, some signals work in monads other than just "IO". Usually these are of the form "ReaderT IO" (that is, IO plus some read-only context). The "m" type parameter allows for this.

So now we can get a Reactive Banana event for the slider like this:

   sliderEvent <- registerIOSignal mySlider valueChanged $ do
     v <- rangeGetValue mySlider
     return ((), v)

The two values in the "return" are the return value for the callback (which is just () in this case) and the value we want to send out in the Event.

Some signals do provide parameters directly to the callback, so you need a family of functions like this:

   registerIOSignal1 :: (MonadIO m)
     => object
     -> Signal object (a -> m b)
     -> (a -> m (b, c))
     -> MomentIO (Event c)

   registerIOSignal2 :: (MonadIO m)
     => object
     -> Signal object (a -> b -> m c)
     -> (a -> b -> m (c, d))
     -> MomentIO (Event d)

And so on up to registerIOSignal4, which is the longest one I have needed so far.

Connecting Outputs

Outputs are simpler than inputs. Reactive Banana provides a function for linking an event to an IO action:

 reactimate :: Event (IO ()) -> MomentIO ()

This takes an event carrying IO actions and executes those actions as they arrive. The "MomentIO" return value is the monad used for building up networks of events and behaviors: more of that in "Plumbing" below.

Events are functors, so the usual pattern for using reactimate looks like this:

   reportThis :: Event String -> MomentIO ()
   reportThis ev = do
     let ioEvent = fmap putStrLn ev
     reactimate ioEvent

The argument is an event carrying a string. This is converted into an event carrying IO actions using "fmap", and the result is then passed to reactimate. Obviously this can be reduced to a single line but I've split it out here to make things clearer.

So we can link an event to a GTK attribute like this:

   eventLink :: object -> Attr object a -> Event a -> MomentIO ()
   eventLink obj attr =
     reactimate . fmap (\v -> set obj [attr := v])

Whenever the argument event fires the attribute will be updated with the value carried by the event.

Behaviors can be linked in the same way. Reactive Banana provides the "changes" function to get an event whenever a behavior might have changed. However this doesn't quite work the way you would expect. The type is:

   changes :: Behavior a -> MomentIO (Event (Future a))

The "Future" type reflects the fact that a behavior only changes after the event processing has finished. This lets you write code that cross-links events and behaviors without creating endless loops, but it means you have to be careful when accessing the current value of a behavior. More about this in "Plumbing" below.

To cope with these "Future" values there is a special version of "reactimate" called "reactimate' " (note the tick mark). You use it like this:

   behaviorLink :: object -> Attr object a -> Behavior a -> MomentIO ()
   behaviorLink obj attr bhv = do
     fe <- changes bhv
     reactimate' $ fmap (fmap (\v -> set obj [attr := v])) fe

This will update the attribute whenever an event occurs which feeds in to the behavior. Note that this will still happen even if the new value is the same as the old; unlike GTK the Reactive Banana library doesn't cancel updates if the new and old values are the same.


Plumbing: The Basic Concepts

Reactive Banana events and behaviors are connected together in the MomentIO monad. This is an instance of MonadFix so you can use recursive do notation, letting you create feedback loops between behaviors and events. MomentIO is also an instance of MonadIO, so you can use liftIO to bring GTK widget actions into it.

To set up a dialog containing a bunch of GTK widgets you do the following things in the MomentIO monad:

  • Use liftIO on GTK functions to set up the widgets and arrange them in layout boxes in the same way you would if you were just using bare GTK.

  • Use registerIOSignal to get Reactive Banana events from the widgets.

  • Use the Reactive Banana combinators to create new events and behaviors reflecting the application logic you want.

  • Use eventLink and behaviorLink to update widget attributes.

 For instance, you can have a pop-up dialog containing a bunch of input widgets with events attached to their values. Let's say these fields are arguments to the FooData constructor, and you also have a function fooValid :: FooData -> Bool. You can then write your code like this:

   fooDialog :: FooData -> MomentIO (Widget, Event FooData)
   fooDialog (FooData v1 v2) = do

     -- Create GTK widgets w1, w2 and okButton.
     -- Put them all in a top-level "dialog" container.
     .... your GTK code here.

     -- Connect events ev1, ev2 to the values of w1 and w2.
     .... your calls to registerIOSignal here.

     okClick <- registerIOSignal okButton buttonActivate $ return ((), ())
     bhv1 <- stepper v1 ev1
     bhv2 <- stepper v2 ev2
     let fooB = FooData <$> bhv1 <*> bhv2
         -- Behavior is an Applicative, so fooB :: Behavior FooData
         validInput = fooValid <$> fooB
         result = const <$> fooB <@> okClick

     behaviorLink okButton widgetSensitive validInput
     return (dialog, result)

The last line but one links the validInput behavior to the OK button's sensitivity. So if the input data is not valid then the OK button is greyed out and will not respond to clicks. You can use the same technique to do other, more informative things, like highlighting the offending widget or displaying a helpful message in another widget.

The "result =" line needs a bit of explanation. The "<@>" combinator in Reactive Banana works like the applicative "<*>" except that its second argument is an event rather than a behavior. The result is an event that combines the current value of the "fooB" behavior with the value from the "okClick" event. In this case the button click carries no information, so we use the function "const" to just take the current behavior value.

One tip: when writing pure functions that are going to be called via the <@> combinator, it's a good idea to put the argument that will come from the event last.

Keeping it Modular

I have found that these are good rules for designing applications in Reactive Banana:

  • Functions in the MomentIO monad should take events and behaviors as parameters, but only return events.

  • Avoid returning a behavior unless you are sure that this is the only function that needs to change it.

  • Keep the behavior definitions at the top of the call stack where they are used. If in doubt, defer the job of defining the behavior to your caller.

This lets you write independent modules that all have a say in updates to some shared value. The shared value should be represented as a Behavior, and updates to it as Events which either carry new values or (better) update functions. So you have a bunch of independent editor functions and a top-level function which looks like this:

   editor1, editor2, editor3
     :: Behavior FooData -> MomentIO (Event (FooData -> FooData))

   fooDataManager :: FooData -> MomentIO ()
   fooDataManager start = mdo  -- Recursive do notation.
     edit1 <- editor1 fooB
     edit2 <- editor2 fooB
     edit3 <- editor3 fooB
     fooB <- accumB start $ unions [edit1, edit2, edit3]
     -- accumB applies each event function to the accumulated behaviour value.
     return ()

by Paul Johnson ( at February 25, 2018 05:39 PM

Russell O'Connor

Why Is It Taking 20 Minutes to Mine This Bitcoin Block?

Does this sound familiar?

You have just made a Bitcoin transaction and you are eager to see if it appears in the next block. You know that the expected time between Bitcoin blocks is 10 minutes. You check the log of your Bitcoin node. It has been 7 minutes since the previous block. You recall that block occurrences in Bitcoin are a Poisson process, which is memoryless. Even though it has been 7 minutes since the previous block, you still expect to wait another 10 minutes.

Five minutes pass. No new blocks have appeared. You have been staring at your Bitcoin node’s log this entire time. It has now been 12 minutes since the previous block. All your waiting has not changed anything. Even though you have been waiting for 5 minutes, the math says that you are still expected to wait 10 minutes before the next block will appear. A Poisson process is memoryless.

After staring at your Bitcoin node’s log for a further 8 minutes, you finally see a new block. “I swear that this always happens to me,” you say to yourself. “Whenever I’m waiting for my transaction to be confirmed, it always seems that the particular block I’m waiting for takes like 20 minutes to mine.”

My friend, if this has happened to you, you are not alone. This phenomenon is real.

Under the simplifying assumption that Bitcoin’s hashrate is constant, we know that a new block is mined once every 10 minutes on average, and this mining process can be well modeled by a Poisson process. Because Poisson processes are memoryless, at any given time we always expect that the next block will appear, on average, in 10 minutes. This holds no matter how long we have already been waiting. This memorylessness property applies just as well backwards in time as it does forwards in time. That is, if you pick a random point in time, on average, the previous block will have been mined 10 minutes earlier.

This is clear because if you sample a series of events from a Poisson process and take a second sample and reverse the occurrences of that series of events, the two samples will be indistinguishable. Therefore, by this symmetry, it must be the case that when you pick a random point in time, the expected time until the next event is the same as the expected time since the previous event.

“Wait a minute. You are saying that, if I pick a random point in time, we expect the previous block to have been mined 10 minutes in the past, and we expect that the next block will be mined 10 minutes in the future. Doesn’t that mean that we expect a total of 20 minutes between blocks?”

Correct, that is exactly what I am saying. If you pick a random point in time, you expect 20 minutes between the previous block and the next block on average.

“That cannot be true because we know that there are, on average, 10 minutes between blocks, not 20 minutes.”

This apparent paradox is essentially the same as the hitchhiker’s paradox. To resolve this paradox we need to understand that the question, “What is the expected time between blocks?” is underspecified. To compute an expected value we need to know which distribution we are computing the expected value with respect to.

Suppose we observe the Bitcoin blockchain for a while, and we make a list of the time between each successive block. When we average this list of numbers, we will get a value that is close to 10 minutes. Averaging this way corresponds to a distribution where each block interval is sampled with equal probability.

More precisely, the pdf for this distribution of non-negative interval durations is the exponential distribution pdf1(t) = N1 e^(-λ t), where λ is 0.1 min^-1, Bitcoin’s block rate, and where N1 is a normalization constant (which in this case is also 0.1 min^-1). The expected value of this distribution is ∫ t pdf1(t) dt = 10 min.

<figure> graph of pdf1(t) = 0.1 e^(-0.1 t) <figcaption>pdf1: exponential distribution</figcaption> </figure>

Suppose we observe the Bitcoin blockchain for a while, and every day we write down the duration of the block interval that crosses the 9:00 am mark. When we average this list of numbers, we will get a value that is close to 20 minutes. Averaging this way corresponds to a distribution where each block interval is sampled, not with equal probability, but proportionally to how long the interval lasts. For example, we are twice as likely to sample an interval that lasts 14 minutes as one that lasts 7 minutes, simply by virtue of the fact that 14-minute intervals last twice as long as 7-minute intervals.

We can take the pdf for the exponential distribution above and multiply it by a linear factor to reweight the probabilities in accordance with how long the interval is. After normalization, the resulting pdf is that of the gamma distribution with shape parameter 2, pdf2(t) = N2 t e^(-λ t) (whose normalization constant N2 is 0.01 min^-2). The expected value of this distribution is ∫ t pdf2(t) dt = 20 min.

<figure> graph of pdf2(t) = 0.01 t e^(-0.1 t) <figcaption>pdf2: gamma distribution with shape parameter 2</figcaption> </figure>

We can double-check this result by recalling the time-reversal symmetry argument above. When we pick a random point in time, the time until the next block is some random variable X whose pdf is pdf1, and the time since the previous block is a random variable Y whose pdf is also pdf1. Therefore, the total time between the last block and the next block is the random variable X + Y. We can compute the distribution of this sum by taking the convolution of pdf1 with itself, and we indeed get pdf2 as a result.

The bias towards picking longer block intervals under the second sampling method accounts for the discrepancy between the two average block interval durations. However, the word “bias” is not meant to be pejorative: this other sampling method is not incorrect or prejudiced; it is simply a different way of sampling. Which distribution of intervals you need depends on the application. If you want to compute the throughput of Bitcoin, you need the exponential distribution. If you want to know “why does it take 20 minutes to mine the Bitcoin block with my transaction in it?”, you need this gamma distribution.
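As a quick sanity check of the 20-minute claim, here is a small simulation sketch (mine, not part of the argument above): sample exponential block intervals at rate λ = 0.1 per minute and average the length of the interval that straddles a fixed point in time.

import System.Random (randomRIO)

-- One Exp(lambda) sample by inversion; u is kept away from 0.
expSample :: Double -> IO Double
expSample lambda = do
  u <- randomRIO (1e-12, 1)
  pure (negate (log u) / lambda)

-- Length of the block interval containing the time point `at`,
-- for a process started at time 0.
straddling :: Double -> Double -> IO Double
straddling lambda at = go 0
  where
    go t = do
      d <- expSample lambda
      if t + d > at then pure d else go (t + d)

main :: IO ()
main = do
  let lambda = 0.1           -- blocks per minute
      trials = 100000 :: Int
  xs <- mapM (const (straddling lambda 1000)) [1 .. trials]
  -- Prints roughly 20, not 10: the straddling interval is length-biased.
  print (sum xs / fromIntegral trials)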

February 25, 2018 04:05 PM

Dominic Steinitz

Reproducibility and Old Faithful


For the blog post still being written on variational methods, I referred to the still excellent Bishop (2006), who uses as his example the data available in R for the geyser in Yellowstone National Park called “Old Faithful”.

While explaining this to another statistician, they started to ask about the dataset. Since I couldn’t immediately answer their questions (when was it collected? over how long? how was it collected?), I started to look at the dataset more closely. The more I dug into where the data came from, the less certain I became that the dataset actually reflected how Old Faithful actually behaves.

Here’s what I found. On the way, I use Haskell, R embedded in Haskell, data frames in Haskell and nix.

The Investigation

First some imports and language extensions.

> {-# OPTIONS_GHC -Wall                      #-}
> {-# OPTIONS_GHC -fno-warn-unused-top-binds #-}
> {-# LANGUAGE QuasiQuotes          #-}
> {-# LANGUAGE ScopedTypeVariables  #-}
> {-# LANGUAGE DataKinds            #-}
> {-# LANGUAGE FlexibleContexts     #-}
> {-# LANGUAGE TemplateHaskell      #-}
> {-# LANGUAGE TypeOperators        #-}
> {-# LANGUAGE TypeSynonymInstances #-}
> {-# LANGUAGE FlexibleInstances    #-}
> {-# LANGUAGE UndecidableInstances #-}
> {-# LANGUAGE ExplicitForAll       #-}
> {-# LANGUAGE OverloadedStrings    #-}
> {-# LANGUAGE GADTs                #-}
> {-# LANGUAGE TypeApplications     #-}
> module Main (main) where
> import Prelude as P
> import Control.Lens
> import Data.Foldable
> import Frames hiding ( (:&) )
> import Data.Vinyl (Rec(..))
> import qualified Language.R as R
> import Language.R (R)
> import Language.R.QQ
> import Data.Int
> import Plots as P
> import qualified Diagrams.Prelude as D
> import Diagrams.Backend.Rasterific
> import Data.Time.Clock(UTCTime, NominalDiffTime, diffUTCTime)
> import Data.Time.Clock.POSIX(posixSecondsToUTCTime)
> import Data.Attoparsec.Text
> import Control.Applicative

R provides many datasets presumably for ease of use. We can access the “Old Faithful” data set in Haskell using some very simple inline R.

> eRs :: R s [Double]
> eRs = R.dynSEXP <$> [r| faithful$eruptions |]
> wRs :: R s [Int32]
> wRs = R.dynSEXP <$> [r| faithful$waiting |]

And then plot the dataset using

> kSaxis :: [(Double, Double)] -> P.Axis B D.V2 Double
> kSaxis xs = P.r2Axis &~ do
>   P.scatterPlot' xs
> plotOF :: IO ()
> plotOF = do
>   es' <- R.runRegion $ do eRs
>   ws' <- R.runRegion $ do wRs
>   renderRasterific "diagrams/Test.png"
>                    (D.dims2D 500.0 500.0)
>                    (renderAxis $ kSaxis $ zip es' (map fromIntegral ws'))

In the documentation for this dataset, the source is given as Härdle (2012), where the data appear as Table 3 on page 201. That table contains 270 observations, not 272 as the R dataset does! Note that there seems to be a correlation between inter-eruption time and duration of eruption, and that eruptions seem to fall into one of two groups.

The documentation also says “See Also geyser in package MASS for the Azzalini–Bowman version”. So let’s look at this and plot it.

> eAltRs :: R s [Double]
> eAltRs = R.dynSEXP <$> [r| geyser$duration |]
> wAltRs :: R s [Int32]
> wAltRs = R.dynSEXP <$> [r| geyser$waiting |]
> plotOF' :: IO ()
> plotOF' = do
>   es <- R.runRegion $ do _ <- [r| library(MASS) |]
>                          eAltRs
>   ws <- R.runRegion $ do wAltRs
>   renderRasterific "diagrams/TestR.png"
>                    (D.dims2D 500.0 500.0)
>                    (renderAxis $ kSaxis $ zip es (map fromIntegral ws))

A quick look at the first few entries suggests this is the data in Table 1 of Azzalini and Bowman (1990) but NB I didn’t check each item.

It looks quite different to the faithful dataset. But to answer my fellow statistician’s questions: “This analysis deals with data which were collected continuously from August 1st until August 15th, 1985”.

It turns out there is a website which collects data for a large number of geysers and has historical datasets for them, including Old Faithful. The data is in a slightly awkward format and it is not clear (to me at any rate) what some of the fields mean.

Let’s examine the first 1000 observations in the data set using the Haskell package Frames.

First we ask Frames to automatically type the data and read it into a frame.

> tableTypes "OldFaithFul" "/Users/dom/Dropbox/Tidy/NumMethHaskell/variational/Old_Faithful_eruptions_1000.csv"
> loadOldFaithful :: IO (Frame OldFaithFul)
> loadOldFaithful = inCoreAoS (readTable "/Users/dom/Dropbox/Tidy/NumMethHaskell/variational/Old_Faithful_eruptions_1000.csv")

Declare some columns and helper functions to manipulate the times.

> declareColumn "eruptionTimeNew"  ''UTCTime
> declareColumn "eruptionTimeOld"  ''UTCTime
> declareColumn "eruptionTimeDiff" ''NominalDiffTime
> declareColumn "eruptionTimeRat"  ''Rational
> declareColumn "eruptionTimeDoub" ''Double
> declareColumn "parsedDuration"   ''Double
> epochToUTC :: Integral a => a -> UTCTime
> epochToUTC = posixSecondsToUTCTime . fromIntegral
> getEruptionTimeNew :: OldFaithFul -> Record '[EruptionTimeNew]
> getEruptionTimeNew x = pure (Col ((epochToUTC (x ^. eruptionTimeEpoch)))) :& Nil
> getEruptionTimeOld :: OldFaithFul -> Record '[EruptionTimeOld]
> getEruptionTimeOld x = pure (Col ((epochToUTC (x ^. eruptionTimeEpoch)))) :& Nil
> getEruptionTimeDiff :: Record '[EruptionTimeOld, EruptionTimeNew] -> Record '[EruptionTimeDoub]
> getEruptionTimeDiff x = pure (Col ((/60.0) $ fromRational $ toRational $ diffUTCTime (x ^. eruptionTimeNew) (x ^. eruptionTimeOld))) :& Nil

The duration is expressed as a string, mostly in forms such as “3.5m” or “3.5min”, but occasionally in other formats. Using NaNs to represent data we can’t parse is poor practice, but Frames seems to object to types such as Maybe Double.

> getDuration :: OldFaithFul -> Record '["duration" :-> Text]
> getDuration = (rcast @'[Duration])
> getParsedDuration :: Record '[Duration, EruptionTimeDoub] -> Record '[ParsedDuration]
> getParsedDuration x = pure (Col ((f $ (parseOnly parseDuration) (x ^. duration)))) :& Nil
>   where
>     f (Left _)  = 0.0 / 0.0
>     f (Right y) = y
> parseDuration :: Parser Double
> parseDuration =
>   do d <- double
>      _ <- char 'm'
>      return d
>   <|>
>   do d <- double
>      _ <- string "min"
>      return d

To get the times between eruptions we need to zip the frame with its tail so we define this helper function.

> dropFrame :: Int -> Frame r -> Frame r
> dropFrame n s = Frame ((frameLength s) - 1) (\i -> frameRow s (i + n))
> main :: IO ()
> main = do
>   x <- loadOldFaithful

Get the current eruption times and the previous eruption times and put them in a frame.

>   let tn = dropFrame 1 $ fmap (\b -> (getEruptionTimeNew b)) x
>   let tp = fmap (\b -> (getEruptionTimeOld b)) x
>   let tpns = zipFrames tp tn

Get the durations of the eruptions and exclude any gaps when no observations were taken; arbitrarily, if the gap is greater than 1.5 hours.

>   let ds = fmap getDuration x
>   let tds = zipFrames ds (fmap getEruptionTimeDiff tpns)
>   let b = filterFrame (\u -> (u ^. eruptionTimeDoub) <= 150) tds

Parse some of the durations and put the durations and the inter-eruption gap into a single frame.

>   let c = filterFrame (\u -> not $ isNaN $ u ^. parsedDuration) $
>           fmap getParsedDuration b
>   let d = zipFrames b c
>   let g = fmap (\u -> (u ^. parsedDuration, u ^. eruptionTimeDoub)) d

Finally we can plot yet another data set of observations of Old Faithful.

>   renderRasterific "diagrams/TestH1000.png"
>                    (D.dims2D 500.0 500.0)
>                    (renderAxis $ kSaxis $ toList g)
>   R.withEmbeddedR R.defaultConfig $ do
>     plotOF
>     plotOF'
>   putStrLn "Finished"

To me this doesn’t look like either of the other two datasets. Perhaps the R faithful data set should be validated, possibly retired and maybe replaced by something with firmer foundations?


I use nix on macOS for package management, both for Haskell and R – no more install.packages problems. Here’s my shell.nix file:

{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc822", doBenchmark ? false }:

let

  inherit (nixpkgs) pkgs;

f = { mkDerivation, array, base, bytestring, cassava, containers
    , datasets, diagrams-lib, diagrams-rasterific, foldl, Frames, ghc-prim, hmatrix, hmatrix-gsl
    , inline-r, lens, mtl, pipes, plots, random-fu, R, random-source
    , stdenv, template-haskell, text, typelits-witnesses, vector, vinyl }:
mkDerivation {
  pname = "variational";
  version = "";
  src = ./.;
  isLibrary = false;
  isExecutable = true;
  executableHaskellDepends = [
    array base bytestring cassava containers datasets diagrams-lib
    diagrams-rasterific foldl Frames ghc-prim hmatrix hmatrix-gsl
    inline-r lens mtl pipes plots random-fu random-source
    template-haskell text typelits-witnesses vector vinyl ];
  executableSystemDepends = [
    R
    pkgs.rPackages.rstan ];
  license = stdenv.lib.licenses.bsd3;
};

haskellPackages = if compiler == "default"
  then pkgs.haskellPackages
  else pkgs.haskell.packages.${compiler};

variant = if doBenchmark then pkgs.haskell.lib.doBenchmark else pkgs.lib.id;

drv = variant (haskellPackages.callPackage f {});

in

  if pkgs.lib.inNixShell then drv.env else drv


Azzalini, A., and A. W. Bowman. 1990. “A Look at Some Data on the Old Faithful Geyser.” Applied Statistics 39: 357–65.

Bishop, C.M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer.

Härdle, W. 2012. Smoothing Techniques: With Implementation in S. Springer Series in Statistics. Springer New York.

by Dominic Steinitz at February 25, 2018 09:49 AM

Brent Yorgey

Visiting assistant professor position at Hendrix

My department at Hendrix College is hiring a 1-year visitor in CS for next year. This is my third year at Hendrix and I really love it—the department is friendly and supportive, the administration really cares about faculty and students, and the students, on the whole, are motivated, bright, and enthusiastic. The position is a sabbatical replacement, so there’s no possibility of it turning into something longer than a year; nonetheless, it is a great opportunity for someone interested in pursuing a position at a liberal arts institution who wants to gain more teaching experience (something that is an absolute must for a liberal arts job but can be hard to come by in grad school). I think my department does a great job of supporting one another and helping each other grow as teachers, so it wouldn’t just be something to put on your CV, but a real opportunity to become a better teacher. And Conway, Arkansas is a wonderful place to live—despite what certain prejudices may lead you to believe!

The initial application deadline is soon (February 28)—I forgot to post something about it earlier!—but we’ll continue to accept applications until the position is filled, so don’t hesitate to apply even if you end up missing the deadline by a bit. And of course feel free to contact me with any questions.

Here’s a link to the application.

by Brent at February 25, 2018 03:36 AM

February 23, 2018

Edward Z. Yang

Semantic Import Versioning in the wild

The best and worst thing about semantic import versioning is that it makes BC-breaking changes hard.

In the past few days, Russ Cox has made a splash in a series of white papers describing Go and Versioning. In them, he coins a new term, Semantic Import Versioning, distilling it to the following principle:

If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.

I am very happy Russ has come up with a good name for semantic import versioning, because this concept has been out there for quite a long time, but without a concise name or formulation of its design. In fact, I would even say that semantic import versioning is inevitable when you take on the premise that you will never break user code. It is so inevitable that semantic import versioning is already practiced in the wild in a variety of places. Here are a few examples:

  • REST APIs often are versioned with explicit version numbers in the request (e.g., in the URI) to let clients specify what version of the API they want. If a client wishes to upgrade to a new version of the API, they must rewrite their API requests to a new URL. REST APIs are forced to semantic import versioning because the traditional mechanism for avoiding breakage, version bounds, are unavailable in this setting.
  • Stripe's REST API pins each of their customers to the version of their API at the time they subscribed; even if Stripe makes a BC-breaking change in the future, the API for a given customer never changes. In this case, the semantic import is still there, but it is implicit (associated with a customer account) rather than explicit (in the client code); consequently, Stripe is willing to break BC a lot more frequently than would otherwise be acceptable for a REST API. Stripe's blog post points out a very important aspect of maintaining libraries under semantic import versioning, which is that you need to put in the engineering effort to sustainably manage all of the semantic imports available to users.
  • Semantic import versioning is widely practiced in programming languages, in the form of language standards/epochs. In C++, the setting of -std=c++xx specifies a particular semantic version to be "imported". It would be unheard of for a compiler to unilaterally break backwards compatibility of -std=c++11 in a new revision of the compiler; similarly, a user must explicitly migrate to a new language standard to take advantage of any new features. Rust epochs have a similar tenor. The choice between Python 2 and Python 3 is another form of semantic import versioning.
  • Semantic imports don't have to just specify a number. Feature flags, such as {-# LANGUAGE #-} pragmas in GHC Haskell, let users opt into BC-breaking changes at their use-sites.
  • In the deep learning world, ONNX models declare a semantic import to a particular version of an operator set. Operator semantics can evolve in BC-compatible ways without bumping the version, but to take a BC-breaking change, you must update the import statement.

One insight I draw from these examples is that what we call an "import version" is really a specification for some series of implementations. To someone who has spent a lot of time thinking about module systems, this is really a step in the right direction: program against interfaces, not implementations.

Another thing we can observe from these examples is the real-world consequences of semantic import versioning. One particular effect stands out: semantic import versioning is challenging for maintainers, because it pressures them to maintain multiple major release branches simultaneously (after all, who wants to use pkg/v2 only to have it immediately unmaintained when pkg/v3 comes out?). In the traditional release branch model, where one creates a release branch for each major version, only the most well-staffed software development teams can afford to maintain multiple, active release branches (backporting patches is a lot of work!). The friction involved with managing multiple implementations means that less well-staffed projects will have a strong pressure to never break backwards compatibility.

This may not sound like such a bad thing to the "don't break my stuff" grumps in the audience, but a lot of bugs and security problems have stemmed from being literally unable to outlaw harmful and dangerous APIs with BC-breaking changes. The danger of moving the calculus further towards preserving backwards compatibility is a further entrenchment of bad "first try" APIs. So while I do not deny that the genius of Russ's framing is to describe semantic versioning as part of the package path, it also sets up a bad expectation for the feasibility of BC-breaking changes, when what we should be doing is improving the state of tooling so that making a BC-breaking change is "no big deal." To me, the most promising way to reduce the friction of a BC-breaking change is to organize your software development so that a single codebase, under a single build, implements multiple specifications (v1, v2 and v3). As we saw from the examples, compilers can manage this (GCC supports multiple C++ versions), but traditional programming languages make it hard for libraries to do the same thing.

I don't know exactly how to solve this problem, but I do have a few ideas:

  1. Treat specifications as data. This means you can write code that operates over a specification, and for example, automatically generate the boilerplate necessary to forward from one implementation to another.
  2. Don't ask programmers to manually write diffs. I would never ask you to make a source code change by writing a diff by hand, just because this is the best representation for a VCS to store. Instead, you would just make the edit, and expect the system to figure it out. BC-breaking changes to APIs follow the same principle; it is much simpler and easier to understand if you just make the change, rather than write a description of the change.
  3. Package level modularity. In a traditional package management system, I can't release a single bundle of source code which presents multiple "package interfaces". Even in vgo, even if I have a shared codebase implementing v1 and v2, I still have to make two releases to publish a new version of code. This is backwards; there is no reason a single unit of code cannot provide multiple interfaces, and package tools should make this possible.

These are maybe a bit too radical to expect Go to adopt them, but perhaps the next generation of programming languages will explore this design space further.

by Edward Z. Yang at February 23, 2018 05:06 AM

February 20, 2018

Mark Jason Dominus

Composition of utility pole ID tags

In a recent article discussing utility poles, and the metal ID plates they carry, I wondered what the plates were made of:

Steel would rust; and I thought even stainless steel wouldn't last as long as these tags need to. Aluminum is expensive. Tin degrades at low temperatures. … I will go test the tags with a magnet to see if they are ferrous.

They are not ferrous. Probably they are aluminum. My idea that aluminum is too expensive to use for the plates was ridiculous. The pole itself costs a lot of money. The sophisticated electrical equipment on the pole costs thousands of dollars. The insulated wire strung from the pole is made of copper. Compared with all this, a ten-centimeter oval of stamped aluminum is not a big deal.

1.8mm aluminum sheet costs $100 per square meter even if you don't buy it in great quantity. Those aluminum tags probably cost no more than fifty cents each.
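
As a sanity check on that last figure (my arithmetic, assuming the oval is roughly 10 cm by 7 cm):

\[ \pi \times 0.05\,\mathrm{m} \times 0.035\,\mathrm{m} \approx 0.0055\,\mathrm{m}^2, \qquad 0.0055\,\mathrm{m}^2 \times \$100/\mathrm{m}^2 \approx \$0.55 \]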

by Mark Dominus at February 20, 2018 04:40 AM

February 18, 2018

Neil Mitchell

Atomic Expressions Generically

Summary: For certain hints HLint needs to determine if a Haskell expression is atomic. I wrote a generic method to generate expressions and test if they are atomic.

With HLint, if you write a statement such as:

main = print ("Hello")

You get the hint:

Sample.hs:1:14: Warning: Redundant bracket
Found:
  ("Hello")
Why not:
  "Hello"

One of ways HLint figures out if brackets are redundant is if the expression inside the brackets is "atomic" - if you never have to bracket it in any circumstances. As an example, a literal string is atomic, but an if expression is not. The isAtom function from haskell-src-exts-util has a list of the types of expression which are atomic, but the Exp type from haskell-src-exts has 55 distinct constructors, and I don't even know what many of them do. How can we check the isAtom function is correct?

One approach is to use human thought, and that's the approach used until now, with reasonable success. However, I've recently written a script which solves the problem more permanently, generating random expressions and checking that isAtom gives the right value. In this post I'm going to outline a few features of how that script works. There are basically three steps:

1) Generate a type-correct Exp

The first step is to generate a random Exp which follows the type definition. Fortunately the Data class in Haskell lets us generate values. We define:

mkValue :: forall a . Data a => Int -> IO a
mkValue depth
    | Just x <- cast "aA1:+" = randomElem x
    | Just x <- cast [-1 :: Int, 1] = randomElem x
    | Just x <- cast [-1 :: Integer, 1] = randomElem x
    | AlgRep cs <- dataTypeRep $ dataTypeOf (undefined :: a) =
        if depth <= 0
            then throwIO LimitReached
            else fromConstrM (mkValue $ depth - 1) =<< randomElem cs

Here we are saying that given a depth, and a result type a, we generate a value of type a. Note that the a argument is the result, but we don't pass anything in of type a. The first three lines of the body follow the pattern:

    | Just x <- cast [list_of_element] = randomElem x

This tries to convert list_of_element to [a] by using runtime type information. If it succeeds, we pick a random element from the list. If it doesn't we continue onwards.

The final case uses dataTypeRep/dataTypeOf to get a list of the constructors of a. Note that we don't have a value of a, so we make one up using undefined :: a - but that's OK because dataTypeOf promises not to look at its argument. Given a list of constructors, we pick one at random, and then call fromConstrM - which says how to create a value of the right constructor, using some argument to fill in all the fields. We pass mkValue as that argument, which causes us to recursively build up random values.
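
As an aside, randomElem isn't shown in the post; a minimal sketch of it (my assumption about its behaviour: pick uniformly from a non-empty list) would be:

import System.Random (randomRIO)

randomElem :: [a] -> IO a
randomElem xs = do
  i <- randomRIO (0, length xs - 1)
  return (xs !! i)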

One immediate problem is what if we are building a [Int] and the random generator often picks (:)? We'll take a very long time to finish. To solve this problem we keep a depth counter, decrement it in every recursive call, and when it runs out, throwIO an exception and give up.

2) Generate a parsing Exp

Now we've got a valid Exp value, but just because an Exp can be represented in the AST doesn't mean it corresponds to a valid Haskell fragment. As an example, consider Var (UnQual (Ident "Test")). That's a valid value of type Exp, but if you pretty print it you get Test, and if you parse it back you'll get Con (UnQual (Ident "Test")) - variables must start with a leading lower-case letter.

To ignore invalid expressions we try pretty printing then parsing the expression, and ignore all expressions which don't roundtrip.

3) Determine if the Exp is atomic

Now we've got a valid Exp, which we know the user could have typed in as a source program, and we need to figure out if isAtom is correct. To do that, we check whether self-application of a given expression x roundtrips, i.e. x x. As a positive example, foo (a variable) roundtrips, since foo foo is foo applied to itself. However, if b then t else f applied to itself gives if b then t else f if b then t else f, which parses back more like if b then t else f (if b then t else f), and so is not atomic.
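
Here is a sketch of that roundtrip check (my reconstruction using haskell-src-exts, not the script's exact code):

import Language.Haskell.Exts

-- An expression is atomic if applying it to itself still parses back
-- as a self-application after pretty printing.
selfAppRoundtrips :: Exp SrcSpanInfo -> Bool
selfAppRoundtrips e =
  case parseExp (prettyPrint e ++ " " ++ prettyPrint e) of
    ParseOk (App _ f x) ->
      prettyPrint f == prettyPrint e && prettyPrint x == prettyPrint e
    _ -> False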

Putting it all together

Now we've got a random expression, and we know if the atomicity agrees with what we were expecting, we can report any differences. That approach has identified many additional patterns to match, but it's not perfect, in particular:

  • Most values either exceed the depth limit or fail to roundtrip. For 10,000 if expressions I typically get 1 or 2 which roundtrip properly. For non-if expressions it's usually 100 or so. The advantage of random testing is that throwing more time at a problem solves such issues without thinking too hard.
  • For some expressions, e.g. ParComp, I've never managed to get a valid value created. Perhaps haskell-src-exts can't parse it, or perhaps it requires constants I don't have in my hardcoded list - none of these were particularly common examples.
  • haskell-src-exts has a bug where -1 is pretty printed as (-1), which is then parsed as a paren and -1. That fails step 2, so we don't test with negative literals. As it happens, non-negative literals are atomic, but negative literals aren't, so we need to take care.
  • There are some patterns which appear to roundtrip successfully on their own, but not when surrounded by brackets, but secretly are just very weird. For example do rec\n [] parses successfully, but with source positions that are error values, and when applied to itself pretty prints incorrectly. There's at least one haskell-src-exts bug here.
  • The program appears to leak progressively more memory. I solved that by running slices of it at a time, and didn't look too hard. I've seen cases of blowup in Data constructors when recursing, so it could be that, but it needs investigating.

As a result of all this work a future HLint will spot unnecessary brackets for 20 more types of expression, 8 more types of pattern and 7 more types of type.

by Neil Mitchell at February 18, 2018 11:00 PM

Kevin Reid (kpreid)

Blog moved

Moving to

(In the event that you are looking at this post in a distant future where both are abandoned, check my web site for the freshest link.)

by Kevin Reid (kpreid) at February 18, 2018 08:39 PM

Michael Snoyman

Haskell Ecosystem Requests

Last month, I clarified some parts of the SLURP proposal. I'm intentionally not getting into the SLURP proposal itself here; if you missed that episode, don't worry about it. One of the outcomes of that blog post was that I shared some of the requests I had made in private that ultimately led to the SLURP proposal.

A single comment in a mega-thread on Github is hardly a good place to write down these requests, however, and it seems like there's no progress on them. I'm going to instead put down these ideas here, with a bit more explanation, and a few more ideas that have popped up since then.

(If you really want to, feel free to see the context of my original comment.)

These points should be made in some kind of more official forum, but:

  1. I'm honestly not sure where that forum is
  2. I don't believe the official forums we typically use for discussions of community infrastructure are nearly visible enough to most community members

So I'll start the conversation here, and later we can move it to the right place.

PVP adherence is optional

I would like to see some kind of statement on Hackage that says something like, "PVP adherence is recommended, but not required. You are free to upload a package even if it does not conform to the PVP." Which I realize is in fact exactly what the current policy is, but in many discussions this was unclear to people. And having a clear sentence to quote when online discussions get heated would be useful. Without something like this, I believe that we will continue having regular online flamewars about the PVP, which is the biggest thing I've been trying to get to stop over the past few years.

Hackage Trustee guidelines

Going along with this, I would like to request a change to the Hackage Trustee guidelines (or whatever the appropriate term is), namely that it is not appropriate to PVP police on social media. Sending PRs and opening issues: totally acceptable. Emails to authors: totally acceptable. If an author requests that these stop: they must stop. Publicly criticizing an author for not following the PVP: unacceptable. I do realize that enforcing a policy on how people behave personally is difficult. But I'd be happy to see the change even if it wasn't easily enforceable.

Downstream projects

Private discussions tried to achieve some kind of technical policy which would avoid breakage to Stackage and Stack. It seems like those private discussions did not reach any conclusion. However, regardless of any technical policy that is put in place, I would request that a simple goal be stated:

GHC, Hackage, and Cabal will strive to meet the needs of commonly used downstream projects, including but not limited to Stackage, Stack, and Nix.

I'm not asking for any demands of compatibility or testing, simply a stated policy that "it works with cabal-install, that's all that matters" is not a sufficient response.

Maintainer guidelines

There have been a number of issues and pull requests recently where contributors to some infrastructure projects have been discouraged by the unclear process for getting their changes included upstream. See, as examples:

More generally, there is an ongoing culture in some places of goals/agendas/plans being made privately and not shared, which leads to an inability of people outside of an inner circle to contribute. See, for example:

I would like to recommend some maintainer guidelines be put in place for any core Haskell packages and projects. (What constitutes "core" could definitely be up for debate as well.) I'd like to see some rules like:

  • Plans for significant changes must start as an issue in an issue tracker (see Gabriel's golden rule)
  • Plans for major changes should have a mention in a more public forum than an issue tracker. As a concrete example: the newly added ^>= operator has significant impacts on how downstream projects like Stackage interact with dependency bounds, but no public comment period was granted to provide input before the 2.0 release. (And even post release, as referenced above, the full plan has not been revealed.)
  • Pull requests which are rejected are given a reason for being rejected (this includes simple refusal to merge). See, for example, hackage-security #206.

There are likely many other guidelines we could come up with, some more onerous than others. I encourage others to recommend other ideas too. One possible source of inspiration for this could be the maintainer communication advice I wrote up a few years ago.

February 18, 2018 04:41 PM

February 17, 2018

Ken T Takusagawa

[omqzhxkn] Lagrange Four-Square Theorem examples

A listing of numbers and how to express them as a sum of 4 squares will probably provoke curiosity: There isn't an obvious pattern of how to express a given number as sum of 4 squares.  Can all natural numbers be expressed this way?  (Yes, by Lagrange.)  Which numbers can be expressed as the sum of just 3 squares (answer: Legendre Three-Square Theorem), or 2?  As numbers get larger, there seems to be a trend of more ways to express it as 4 or fewer squares, kind of reminiscent of Goldbach conjecture.  What is the rate of growth of the number of ways?  What about cubes and higher powers (Waring's problem)?  There's lots of deep mathematics lurking just beneath the surface.  It's just a short skip and a jump to Fermat's Last Theorem.

We generated 4-square decompositions up to 121=11^2 in order to include 112 = 7 * 4^2, the first instance where Legendre's 3-square theorem applies with an exponent (on 4) greater than 1.  The number with the most ways to express it in this range was 90, with 9.

We also provide a more compact list which expresses each number in the fewest number of squares, but still listing all possibilities for that fewest number of squares.  The full version has 436 lines; the compact version has 188.  The compact version makes it more clear (perhaps inspiring more curiosity) which numbers require 4 squares and which ones can be done in less.

Similar lists could be made for Gauss's Eureka Theorem on sum of 3 triangular numbers and the Goldbach conjecture on the sum of 2 primes.

Haskell source code is here.  Here is a pedagogical excerpt of how to choose num decreasing numbers bounded by 0 and amax.  We use the list as a nondeterminism monad.

choose_n_of_max :: Integer -> Int -> [[Integer]];
choose_n_of_max amax num = case compare num 0 of {
  LT -> error "negative choose_n_of_max";
  EQ -> return [];
  GT -> do {
    x <- [0..amax];
    y <- choose_n_of_max x (pred num);
    return (x:y);
  };
};

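As a usage sketch (my addition; four_squares is a hypothetical helper, not in the original source), the decompositions below can be generated by filtering these choices:

four_squares :: Integer -> [[Integer]];
four_squares n = filter (\xs -> sum (map (^2) xs) == n) (choose_n_of_max isqrtn 4) where {
  isqrtn = floor (sqrt (fromInteger n :: Double));
};

For example, four_squares 90 should yield the nine decompositions mentioned above.
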
Below is a machine-readable listing of the numbers through 121 and all the ways to express each number as the sum of 4 or fewer squares.

(0,[[0,0,0,0]]) (1,[[1,0,0,0]]) (2,[[1,1,0,0]]) (3,[[1,1,1,0]]) (4,[[1,1,1,1],[2,0,0,0]]) (5,[[2,1,0,0]]) (6,[[2,1,1,0]]) (7,[[2,1,1,1]]) (8,[[2,2,0,0]]) (9,[[2,2,1,0],[3,0,0,0]]) (10,[[2,2,1,1],[3,1,0,0]]) (11,[[3,1,1,0]]) (12,[[2,2,2,0],[3,1,1,1]]) (13,[[2,2,2,1],[3,2,0,0]]) (14,[[3,2,1,0]]) (15,[[3,2,1,1]]) (16,[[2,2,2,2],[4,0,0,0]]) (17,[[3,2,2,0],[4,1,0,0]]) (18,[[3,2,2,1],[3,3,0,0],[4,1,1,0]]) (19,[[3,3,1,0],[4,1,1,1]]) (20,[[3,3,1,1],[4,2,0,0]]) (21,[[3,2,2,2],[4,2,1,0]]) (22,[[3,3,2,0],[4,2,1,1]]) (23,[[3,3,2,1]]) (24,[[4,2,2,0]]) (25,[[4,2,2,1],[4,3,0,0],[5,0,0,0]]) (26,[[3,3,2,2],[4,3,1,0],[5,1,0,0]]) (27,[[3,3,3,0],[4,3,1,1],[5,1,1,0]]) (28,[[3,3,3,1],[4,2,2,2],[5,1,1,1]]) (29,[[4,3,2,0],[5,2,0,0]]) (30,[[4,3,2,1],[5,2,1,0]]) (31,[[3,3,3,2],[5,2,1,1]]) (32,[[4,4,0,0]]) (33,[[4,3,2,2],[4,4,1,0],[5,2,2,0]]) (34,[[4,3,3,0],[4,4,1,1],[5,2,2,1],[5,3,0,0]]) (35,[[4,3,3,1],[5,3,1,0]]) (36,[[3,3,3,3],[4,4,2,0],[5,3,1,1],[6,0,0,0]]) (37,[[4,4,2,1],[5,2,2,2],[6,1,0,0]]) (38,[[4,3,3,2],[5,3,2,0],[6,1,1,0]]) (39,[[5,3,2,1],[6,1,1,1]]) (40,[[4,4,2,2],[6,2,0,0]]) (41,[[4,4,3,0],[5,4,0,0],[6,2,1,0]]) (42,[[4,4,3,1],[5,3,2,2],[5,4,1,0],[6,2,1,1]]) (43,[[4,3,3,3],[5,3,3,0],[5,4,1,1]]) (44,[[5,3,3,1],[6,2,2,0]]) (45,[[4,4,3,2],[5,4,2,0],[6,2,2,1],[6,3,0,0]]) (46,[[5,4,2,1],[6,3,1,0]]) (47,[[5,3,3,2],[6,3,1,1]]) (48,[[4,4,4,0],[6,2,2,2]]) (49,[[4,4,4,1],[5,4,2,2],[6,3,2,0],[7,0,0,0]]) (50,[[4,4,3,3],[5,4,3,0],[5,5,0,0],[6,3,2,1],[7,1,0,0]]) (51,[[5,4,3,1],[5,5,1,0],[7,1,1,0]]) (52,[[4,4,4,2],[5,3,3,3],[5,5,1,1],[6,4,0,0],[7,1,1,1]]) (53,[[6,3,2,2],[6,4,1,0],[7,2,0,0]]) (54,[[5,4,3,2],[5,5,2,0],[6,3,3,0],[6,4,1,1],[7,2,1,0]]) (55,[[5,5,2,1],[6,3,3,1],[7,2,1,1]]) (56,[[6,4,2,0]]) (57,[[4,4,4,3],[5,4,4,0],[6,4,2,1],[7,2,2,0]]) (58,[[5,4,4,1],[5,5,2,2],[6,3,3,2],[7,2,2,1],[7,3,0,0]]) (59,[[5,4,3,3],[5,5,3,0],[7,3,1,0]]) (60,[[5,5,3,1],[6,4,2,2],[7,3,1,1]]) (61,[[5,4,4,2],[6,4,3,0],[6,5,0,0],[7,2,2,2]]) (62,[[6,4,3,1],[6,5,1,0],[7,3,2,0]]) (63,[[5,5,3,2],[6,3,3,3],[6,5,1,1],[7,3,2,1]]) (64,[[4,4,4,4],[8,0,0,0]]) (65,[[6,4,3,2],[6,5,2,0],[7,4,0,0],[8,1,0,0]]) (66,[[5,4,4,3],[5,5,4,0],[6,5,2,1],[7,3,2,2],[7,4,1,0],[8,1,1,0]]) (67,[[5,5,4,1],[7,3,3,0],[7,4,1,1],[8,1,1,1]]) (68,[[5,5,3,3],[6,4,4,0],[7,3,3,1],[8,2,0,0]]) (69,[[6,4,4,1],[6,5,2,2],[7,4,2,0],[8,2,1,0]]) (70,[[5,5,4,2],[6,4,3,3],[6,5,3,0],[7,4,2,1],[8,2,1,1]]) (71,[[6,5,3,1],[7,3,3,2]]) (72,[[6,4,4,2],[6,6,0,0],[8,2,2,0]]) (73,[[5,4,4,4],[6,6,1,0],[7,4,2,2],[8,2,2,1],[8,3,0,0]]) (74,[[6,5,3,2],[6,6,1,1],[7,4,3,0],[7,5,0,0],[8,3,1,0]]) (75,[[5,5,4,3],[5,5,5,0],[7,4,3,1],[7,5,1,0],[8,3,1,1]]) (76,[[5,5,5,1],[6,6,2,0],[7,3,3,3],[7,5,1,1],[8,2,2,2]]) (77,[[6,4,4,3],[6,5,4,0],[6,6,2,1],[8,3,2,0]]) (78,[[6,5,4,1],[7,4,3,2],[7,5,2,0],[8,3,2,1]]) (79,[[5,5,5,2],[6,5,3,3],[7,5,2,1]]) (80,[[6,6,2,2],[8,4,0,0]]) (81,[[6,5,4,2],[6,6,3,0],[7,4,4,0],[8,3,2,2],[8,4,1,0],[9,0,0,0]]) (82,[[5,5,4,4],[6,6,3,1],[7,4,4,1],[7,5,2,2],[8,3,3,0],[8,4,1,1],[9,1,0,0]]) (83,[[7,4,3,3],[7,5,3,0],[8,3,3,1],[9,1,1,0]]) (84,[[5,5,5,3],[6,4,4,4],[7,5,3,1],[8,4,2,0],[9,1,1,1]]) (85,[[6,6,3,2],[7,4,4,2],[7,6,0,0],[8,4,2,1],[9,2,0,0]]) (86,[[6,5,4,3],[6,5,5,0],[7,6,1,0],[8,3,3,2],[9,2,1,0]]) (87,[[6,5,5,1],[7,5,3,2],[7,6,1,1],[9,2,1,1]]) (88,[[6,6,4,0],[8,4,2,2]]) (89,[[6,6,4,1],[7,6,2,0],[8,4,3,0],[8,5,0,0],[9,2,2,0]]) (90,[[6,5,5,2],[6,6,3,3],[7,4,4,3],[7,5,4,0],[7,6,2,1],[8,4,3,1],[8,5,1,0],[9,2,2,1],[9,3,0,0]]) (91,[[5,5,5,4],[7,5,4,1],[8,3,3,3],[8,5,1,1],[9,3,1,0]]) (92,[[6,6,4,2],[7,5,3,3],[9,3,1,1]]) (93,[[6,5,4,4],[7,6,2,2],[8,4,3,2],[8,5,2,0],[9,2,2,2]]) 
(94,[[7,5,4,2],[7,6,3,0],[8,5,2,1],[9,3,2,0]]) (95,[[6,5,5,3],[7,6,3,1],[9,3,2,1]]) (96,[[8,4,4,0]]) (97,[[6,6,4,3],[6,6,5,0],[7,4,4,4],[8,4,4,1],[8,5,2,2],[9,4,0,0]]) (98,[[6,6,5,1],[7,6,3,2],[7,7,0,0],[8,4,3,3],[8,5,3,0],[9,3,2,2],[9,4,1,0]]) (99,[[7,5,4,3],[7,5,5,0],[7,7,1,0],[8,5,3,1],[9,3,3,0],[9,4,1,1]]) (100,[[5,5,5,5],[7,5,5,1],[7,7,1,1],[8,4,4,2],[8,6,0,0],[9,3,3,1],[10,0,0,0]]) (101,[[6,6,5,2],[7,6,4,0],[8,6,1,0],[9,4,2,0],[10,1,0,0]]) (102,[[6,5,5,4],[7,6,4,1],[7,7,2,0],[8,5,3,2],[8,6,1,1],[9,4,2,1],[10,1,1,0]]) (103,[[7,5,5,2],[7,6,3,3],[7,7,2,1],[9,3,3,2],[10,1,1,1]]) (104,[[6,6,4,4],[8,6,2,0],[10,2,0,0]]) (105,[[7,6,4,2],[8,4,4,3],[8,5,4,0],[8,6,2,1],[9,4,2,2],[10,2,1,0]]) (106,[[6,6,5,3],[7,5,4,4],[7,7,2,2],[8,5,4,1],[9,4,3,0],[9,5,0,0],[10,2,1,1]]) (107,[[7,7,3,0],[8,5,3,3],[9,4,3,1],[9,5,1,0]]) (108,[[6,6,6,0],[7,5,5,3],[7,7,3,1],[8,6,2,2],[9,3,3,3],[9,5,1,1],[10,2,2,0]]) (109,[[6,6,6,1],[8,5,4,2],[8,6,3,0],[10,2,2,1],[10,3,0,0]]) (110,[[7,6,4,3],[7,6,5,0],[8,6,3,1],[9,4,3,2],[9,5,2,0],[10,3,1,0]]) (111,[[6,5,5,5],[7,6,5,1],[7,7,3,2],[9,5,2,1],[10,3,1,1]]) (112,[[6,6,6,2],[8,4,4,4],[10,2,2,2]]) (113,[[6,6,5,4],[8,6,3,2],[8,7,0,0],[9,4,4,0],[10,3,2,0]]) (114,[[7,6,5,2],[7,7,4,0],[8,5,4,3],[8,5,5,0],[8,7,1,0],[9,4,4,1],[9,5,2,2],[10,3,2,1]]) (115,[[7,5,5,4],[7,7,4,1],[8,5,5,1],[8,7,1,1],[9,4,3,3],[9,5,3,0]]) (116,[[7,7,3,3],[8,6,4,0],[9,5,3,1],[10,4,0,0]]) (117,[[6,6,6,3],[7,6,4,4],[8,6,4,1],[8,7,2,0],[9,4,4,2],[9,6,0,0],[10,3,2,2],[10,4,1,0]]) (118,[[7,7,4,2],[8,5,5,2],[8,6,3,3],[8,7,2,1],[9,6,1,0],[10,3,3,0],[10,4,1,1]]) (119,[[7,6,5,3],[9,5,3,2],[9,6,1,1],[10,3,3,1]]) (120,[[8,6,4,2],[10,4,2,0]]) (121,[[7,6,6,0],[8,5,4,4],[8,7,2,2],[9,6,2,0],[10,4,2,1],[11,0,0,0]])

by Ken at February 17, 2018 03:23 AM

February 15, 2018

Joachim Breitner

Interleaving normalizing reduction strategies

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda-term o = (λx.x x x), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x))(o…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi + 1. In particular, Mi has a normal form.

The two reduction strategies are:

  • s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: if during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternatingly consult s1 and s2 while trying to reduce M2, we get the sequence

M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.

by Joachim Breitner at February 15, 2018 07:17 PM

February 14, 2018

Michael Snoyman

Stack Patching Policy

This blog post is about a potential policy decision affecting the maintenance of the Stack code base itself. It will affect contributors to the project, and those building Stack for other purposes (such as maintainers of Linux distro packages). It will only indirectly affect end users, as hopefully is made clear in the discussion below.

Github issue for official discussion


Until now, every version of Stack that has been released (or even merged to master, unless I'm mistaken) has exclusively used versions of dependencies available on Hackage. It has not used the extra-dep archive or Git repo feature, or submodules to include alternative versions of source code. This means that, for the most part, you get the same Stack whether you get an official download, run stack build inside the source tree, use stack build using a Stackage snapshot, or run cabal install stack.

Now, as it happens, this isn't completely true either. The official Stack binaries pin the dependencies to exact versions which have been tested together, via the stack.yaml file. This means that the latter two approaches of getting Stack binaries may have different behavior, due to the snapshot or the dependency solver choosing different versions. Some distros have already run into bugs because of this.

To pull all of that back in: the official way to get Stack today will guarantee a specific set of dependencies which have gone through the full Stack integration test suite. Some alternative methods may not provide the same level of guarantees. But with a bit of effort, you can force Stack or cabal-install to build exactly the same thing the official binaries provide.

The new problem

One issue that pops up is: what do we do in a situation where an upstream package has a bug, and either cannot (within the timeframe desired) or will not release a new version with a fix? The concrete example that pops up is hackage-security pull request #203 (addressing issue #187), though the specific details aren't too important for the discussion here. The discussion here is about the general rule: what should Stack do in this case?

Four options

Others may be more creative than me, but I can see four different options to respond in a situation like this:

  1. Continue using the officially released upstream version of hackage-security, bugs and all
  2. Fork hackage-security on Hackage, and depend on the fork
  3. Inline the code from hackage-security into Stack itself, and drop the explicit dependency on hackage-security
  4. Include hackage-security via an extra-dep pointing at a Git commit. Our official builds will use the patched version of hackage-security, and anyone building from Hackage will end up with the unpatched version
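
For concreteness, an extra-dep pinned to a Git commit looks roughly like this in a stack.yaml (my illustration; the URL and commit hash are placeholders, not the actual patch):

extra-deps:
- git: https://github.com/example/hackage-security.git
  commit: 0123456789abcdef0123456789abcdef01234567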

Option (1) is the status quo: we cannot fix this bug until upstream fixes it. This is a disappointing outcome for users, as we know how to fix the bug, and can imminently do so, but users will continue to suffer regardless. However, it makes maintenance of Stack relatively easy, and has no impact on packagers.

Options (2) and (3) are relatively similar: you end up with a forked version of the codebase feeding into Stack, but all of the code necessary is still available from Hackage. Packagers, and people building with a command like cabal install stack, will still be able to get the right version of the executable, assuming they pin their dependencies the same way we do (as mentioned above).

Option (4) is a more radical departure. It means that cabal install stack, without quite a bit of extra work, will not result in the same executable. You can argue that, given the assumed lack of pinning of dependency versions, this isn't too terribly different from the status quo. And with the patch I've written for hackage-security now, that's basically true. However, it's theoretically possible that, in the future, we could have a patch that changes the API, and makes it impossible to build Stack against the Hackage version of a package. So let's break up option 4 into two subchoices:

  • Option 4a: we can use an extra-dep, but we must ensure that the Stack codebase continues to build against the Hackage version of the package, even if it's missing a bug fix, performance enhancement, or whatever else we wrote
  • Option 4b: free-for-all: use whatever extra-deps we want, and state that there is no support for building from Hackage alone.

My recommendation

I lean towards option 4a. I don't want to upload forks to Hackage (option (2)); it's a confusing situation for users, and may be seen as an aggressive move (which is certainly not the intent here). Option (3) could work, but makes it more painful than it should be to work on the Stack codebase. I'd rather not subject contributors (or myself!) to that.

Option 4b is IMO a step too far: we'd be uploading something to Hackage which we know for a fact could never be built there. At that point, there's not really any reason for uploading to Hackage. And option 1 (we cannot fix bugs) is just too limiting.

The biggest impact I can see is how others will end up packaging Stack. But frankly, this is already a situation that deserves an official discussion. There have certainly been plenty of cases in the past where users tripped on bugs that didn't exist in the official Stack releases, and the Stack team needed to spend inordinate time tracing this back to a bad build. So if nothing else, hopefully this post will spawn some discussion of correct packaging behavior.

Official discussion

As mentioned above, I've created a Github issue for an official discussion of this topic: issue #3866. Other discussions (Disqus below, mailing list, etc) are welcome, but may not receive the full attention of the Stack team.

February 14, 2018 08:38 AM

February 13, 2018

Brent Yorgey

A (work in progress) translation of Joyal’s original paper on species

tl;dr: I’m working on an English translation, with additional commentary, of Joyal’s 1981 paper introducing the concept of combinatorial species. Collaboration and feedback welcome!

Back when I was writing my PhD thesis on combinatorial species, I was aware that André Joyal’s original papers introducing combinatorial species are written in French, which I don’t read. I figured this was no big deal, since there is plenty of secondary literature on species in English (most notably Bergeron et al., which, though originally written in French, has been translated into English by Margaret Readdy). But at some point I asked a question on MathOverflow to which I hadn’t been able to find an answer, and was told that the answer was already in one of Joyal’s original papers!

So I set out to try to read Joyal’s original papers in French (there are two in particular: Une théorie combinatoire des séries formelles, and Foncteurs analytiques et espèces de structures), and found out that it was actually possible since (a) they are mathematics papers, not high literature; (b) I already understand a lot of the mathematics; and (c) these days, there are many easily accessible digital tools to help with the task of translation.

However, although it was possible for me to read them, it was still hard work, and for someone without my background in combinatorics it would be very tough going—which is a shame since the papers are really very beautiful. So I decided to do something to help make the papers and their ideas more widely accessible. In particular, I’m making an English translation of the papers1—or at least of the first one, for now—interspersed with my own commentary to fill in more background, give additional examples, make connections to computation and type theory, or offer additional perspective. I hope it will be valuable to those in the English-speaking mathematics and computer science communities who want to learn more about species or gain more appreciation for a beautiful piece of mathematical history.

This is a long-term project, and not a high priority at the moment; I plan to work on it slowly but steadily. I’ve only worked on the first paper so far, and I’m at least far enough along that I’m not completely embarrassed to publicize it (but not much more than that). I decided to publicize my effort now, instead of waiting until I’m done, for several reasons: first, it may be a very long time before I’m really “done”, and some people may find it helpful or interesting before it gets to that point. Second, I would welcome collaboration, whether in the form of help with the translation itself, editing or extending the commentary, or simply offering feedback on early drafts or fixing typos. You can find an automatically updated PDF with the latest draft here, and the github repo is here. There are also simple instructions for compiling the paper yourself (using stack) should you want to do that.

  1. And yes, I checked carefully, and this is explicitly allowed by the copyright holder (Elsevier) as long as I put certain notices on the first page.↩

by Brent at February 13, 2018 03:07 PM

Manuel M T Chakravarty

PLT engineers needed!

Do you know how to write FP compilers? Would you like to design & implement next-generation, functional(!) smart contract languages with Phil Wadler and myself? Check out Phil’s post and the IOHK job ad.

February 13, 2018 12:44 AM

February 08, 2018

Sandy Maguire

Devlog: Navigation

February 8, 2018 · devlog, neptune

One of the tropes of the golden era of point-n-click adventure games is, would you believe it, the pointing and clicking. In particular, pointing where you’d like the avatar to go, and clicking to make it happen. This post will explore how I made that happen in my neptune game engine.

The first thing we need to do is indicate to the game which parts of the background should be walkable. Like we did for marking hotspots, we’ll use an image mask. Since we have way more density in an image than we’ll need for this, we’ll overlay it on the hotspot mask.

Again, if the room looks like this:

room background

Our mask image would look like this:

room mask

Here, the walkable section of the image is colored in blue. You’ll notice there’s a hole in the walk mask corresponding to the table in the room; we wouldn’t want our avatar to find a path that causes him to walk through the table.

However there is something important to pay attention to here; namely that we’re making an adventure game. Which is to say that our navigation system doesn’t need to be all that good; progress in the game is blocked more by storytelling and puzzles than it is by the physical location of the player (unlike, for example, in a platformer game.) If the avatar does some unnatural movement as he navigates, it might be immersion-breaking, but it’s not going to be game-breaking.

Which means we can half ass it, if we need to. But I’m getting ahead of myself.

The first thing we’re going to need is a function which samples our image mask and determines if a given position is walkable.

canWalkOn :: Image PixelRGBA8 -> V2 Int -> Bool
canWalkOn img (V2 x y)
    = flip testBit walkableBit
    . getWalkableByte
    $ pixelAt img x y
  where
    getWalkableByte (PixelRGBA8 _ _ b _) = b
    walkableBit = 7

Currying this function against our image mask gives us a plain ol’ function which we can use to query walk-space.

In a 3D game, you’d use an actual mesh to mark the walkable regions, rather than using this mask thing. For that purpose, from here on out we’ll call this thing a navmesh, even though it isn’t strictly an appropriate name in our case.

Because pathfinding algorithms are defined in terms of graphs, the next step is to convert our navmesh into a graph. There are lots of clever ways to do this, but remember, we’re half-assing it. So instead we’re going to do something stupid and construct a square graph by sampling every \(n\) pixels, and connecting it to its orthogonal neighbors if both the sample point and its neighbor are walkable.

It looks like this:

graph building

Given the navmesh, we sample every \(n\) points, and determine whether or not to put a graph vertex there (white squares are vertices, the black squares are just places we sampled.) Then, we put an edge between every neighboring vertex (the white lines.)

We’re going to want to run A* over this graph eventually, which is implemented in Haskell via Data.Graph.AStar.aStar. This package uses an implicit representation of this graph rather than taking in a graph data structure, so we’ll construct our graph in a manner suitable for aStar.
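
For reference, aStar has roughly this shape (quoted from memory, so treat the exact signature as an assumption):

aStar :: (Hashable a, Ord a, Ord c, Num c)
      => (a -> HashSet a) -- neighbours of a node
      -> (a -> a -> c)    -- cost between adjacent nodes
      -> (a -> c)         -- heuristic distance to the goal
      -> (a -> Bool)      -- goal test
      -> a                -- starting node
      -> Maybe [a]        -- the path found, if any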

But first, let’s write some helper functions to ensure we don’t get confused about whether we’re in world space or navigation space.

-- | Sample every n pixels in on the navmesh.
sampleRate :: Float
sampleRate = 4

-- | Newtype to differentiate nav node coordinates from world coordinates.
newtype Nav = Nav { unNav :: Int }
  deriving (Eq, Ord, Num, Enum, Integral, Real)

toNav :: V2 Float -> V2 Nav
toNav = fmap round
      . fmap (/ sampleRate)

fromNav :: V2 Nav -> V2 Float
fromNav = fmap (* sampleRate)
        . fmap fromIntegral

toNav and fromNav are roughly inverses of one another – good enough for half-assing it at least. We’ll do all of our graph traversal stuff in nav-space, and use world-space only at the boundaries.
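
For instance (my arithmetic, using the sampleRate of 4 above):

-- toNav (V2 9 9)           => V2 2 2      (9 / 4 = 2.25, which rounds to 2)
-- fromNav (toNav (V2 9 9)) => V2 8.0 8.0  (off by a pixel: good enough)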

We start with some helper functions:

navBounds :: Image a -> V2 Nav
navBounds = subtract 1
          . toNav
          . fmap fromIntegral
          . imageSize

navBounds gives us the largest valid navigation point from an image – this will be useful later when we want to build a graph and don’t want to sample points that are not on it.

The next step is our neighbors function, which should compute the edges for a given node in the navigation graph.

neighbors :: Image PixelRGBA8 -> V2 Nav -> HashSet (V2 Nav)
neighbors img v2 = HS.fromList $ do
  let canWalkOn' = canWalkOn img
                 . fmap floor
                 . fmap fromNav
      V2 w h = navBounds img

  V2 x y <- fmap (v2 &)
            [ _x -~ 1
            , _x +~ 1
            , _y -~ 1
            , _y +~ 1
            ]

  guard $ canWalkOn' v2
  guard $ x >= 0
  guard $ x <= w
  guard $ y >= 0
  guard $ y <= h
  guard . canWalkOn' $ V2 x y
  return $ V2 x y

We use the list monad here to construct all of the possible neighbors – those which are left, right, above and below our current location, respectively. We then guard on each, ensure our current nav point is walkable, that our candidate neighbor is within nav bounds, and finally that the candidate itself is walkable. We need to do this walkable check last, since everything will explode if we try to sample a pixel that is not in the image.

Aside: if you actually have a mesh (or correspondingly a polygon in 2D), you can bypass all of this sampling nonsense by tessellating the mesh into triangles, and using the results as your graph. In my case I didn’t have a polygon, and I didn’t want to write a tessellating algorithm, so I went with this route instead.

Finally we need a distance function, which we will use both for our astar heuristic as well as our actual distance. The actual distance metric we use doesn’t matter, so long as it corresponds monotonically with the actual distance. We’ll use distance squared, because it has this monotonic property we want, and saves us from having to pay the cost of computing square roots.

distSqr :: V2 Nav -> V2 Nav -> Float
distSqr x y = qd (fmap fromIntegral x) (fmap fromIntegral y)

And with that, we’re all set! We can implement our pathfinding by filling in all of the parameters to aStar:

pathfind :: Image PixelRGBA8 -> V2 Float -> V2 Float -> Maybe [V2 Float]
pathfind img src dst =
    fmap fromNav <$> aStar (neighbors img) distSqr (distSqr navDst) (== navDst) navSrc
  where
    navSrc = toNav src
    navDst = toNav dst

Sweet. We can run it, and we’ll get a path that looks like this:

Technically correct, in that it does in fact get from our source location to our destination. But it’s obviously half-assed. This isn’t the path that a living entity would take; as a general principle we try not to move in rectangles if we can help it.

We can improve on this path by attempting to shorten it. In general this is a hard problem, but we can solve that by giving it the old college try.

Our algorithm to attempt to shorten will be a classic divide and conquer approach – pick the two endpoints of your current path, and see if there is a straight line between the two that is walkable throughout its length. If so, replace the path with the line you just constructed. If not, subdivide your path in two, and attempt to shorten each half of it.

Before we actually get into the nuts and bolts of it, here’s a quick animation of how it works. The yellow circles are the current endpoints of the path being considered, and the yellow lines are the potential shortened routes. Whenever we can construct a yellow line that doesn’t leave the walkable region, we replace the path between the yellow circles with the line.

path shortening

The “divide and conquer” bit of our algorithm is easy to write. We turn our path list into a Vector so we can randomly access it, and then call out to a helper function sweepWalkable to do the nitty gritty stuff. We append the src and dst to the extrema of the constructed vector because aStar won’t return our starting point in its found path, and because we quantized the dst when we did the pathfinding, so the last node on the path is the closest navpoint, rather than being where we asked the character to move to.

shorten :: Image PixelRGBA8 -> V2 Float -> V2 Float -> [V2 Float] -> [V2 Float]
shorten img src dst path =
    let v = V.fromList $ (src : path) ++ [dst]
     in go 0 (V.length v - 1) v
  where
    go l u v =
      if sweepWalkable img (v V.! l) (v V.! u)
         then [v V.! u]
         else let mid = ((u - l) `div` 2) + l
               in go l mid v ++ go mid u v

The final step, then, is to figure out what this sweepWalkable thing is. Obviously it wants to construct a potential line between its endpoints, but we don’t want to have to sample every damn pixel. Remember, we’re half-assing it. Instead, we can construct a line, but actually only sample the nav points that are closest to it.

In effect this is “rasterizing” our line from its vector representation into its pixel representation.

Using the Pythagorean theorem in navigation space will give us the “length” of our line in navigation space, which corresponds to the number of navpoints we’ll need to sample.

For example, if our line looks like this:

pythagorean theorem

Then the number \(n\) of nav points we need to sample is:

\[ \begin{align*} n &= \lfloor \sqrt{4^2 + 5^2} \rfloor \\ &= \lfloor \sqrt{16 + 25} \rfloor \\ &= \lfloor \sqrt{41} \rfloor \\ &= \lfloor 6.4 \rfloor \\ &= 6 \end{align*} \]

We can then subdivide our line into 6 segments, and find the point on the grid that is closest to the end of each. These points correspond with the nodes that need to be walkable individually in order for our line itself to be walkable. This approach will fail for tiny strands of unwalkable terrain that slice through otherwise walkable regions, but maybe just don’t do that? Remember, all we want is for it to be good enough – half-assing it and all.



So, how do we do it?

sweepWalkable :: Image PixelRGBA8 -> V2 Float -> V2 Float -> Bool
sweepWalkable img src dst =
  let dir   = normalize $ dst - src
      distInNavUnits = round $ distance src dst
      bounds = navBounds img
   in getAll . flip foldMap [0 .. distInNavUnits] $ \n ->
        let me = src + dir ^* (fromIntegral @Int n)
         in All . canWalkOn' img
                . clamp (V2 0 0) bounds
                $ toNav me
  where
    -- The nav-space version of 'canWalkOn' used in 'neighbors', repeated
    -- here so this snippet stands alone.
    canWalkOn' i = canWalkOn i . fmap floor . fmap fromNav

Sweet! Works great! Our final pathfinding function is thus:

navigate :: Image PixelRGBA8 -> V2 Float -> V2 Float -> Maybe [V2 Float]
navigate img src dst = fmap (shorten img src dst) $ pathfind img src dst

Golden, baby.

Next time we’ll talk about embedding a scripting language into our game so we don’t need to wait an eternity for GHC to recompile everything whenever we want to change a line of dialog. Until then!


February 08, 2018 12:00 AM

February 07, 2018

FP Complete

Best Practices for Developing Medical Device Software

At FP Complete we have experience writing Medical Device software that has to go through rigorous compliance steps and eventually be approved by a government regulatory body such as the US Food and Drug Administration (FDA).

by Niklas Hambüchen at February 07, 2018 08:30 PM

Gabriel Gonzalez

The wizard monoid


Recent versions of GHC 8.0 provide a Monoid instance for IO, and this post gives a motivating example of why this instance is useful by building combinable "wizard"s.


I'll define a "wizard" as a program that prompts a user "up front" for multiple inputs and then performs several actions after all input has been collected.

Here is an example of a simple wizard:

main :: IO ()
main = do
    -- First, we request all inputs:
    putStrLn "What is your name?"
    name <- getLine

    putStrLn "What is your age?"
    age <- getLine

    -- Then, we perform all actions:
    putStrLn ("Your name is: " ++ name)
    putStrLn ("Your age is: " ++ age)

... which produces the following interaction:

What is your name?
Gabriel
What is your age?
31
Your name is: Gabriel
Your age is: 31

... and here is an example of a slightly more complex wizard:

import qualified System.Directory

main :: IO ()
main = do
    -- First, we request all inputs:
    files <- System.Directory.listDirectory "."

    let askFile file = do
            putStrLn ("Would you like to delete " ++ file ++ "?")
            response <- getLine
            case response of
                "y" -> return [file]
                _   -> return []

    listOfListOfFilesToRemove <- mapM askFile files
    let listOfFilesToRemove = concat listOfListOfFilesToRemove

    -- Then, we perform all actions:
    let removeFile file = do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file
    mapM_ removeFile listOfFilesToRemove

... which produces the following interaction:

Would you like to delete file1.txt?
y
Would you like to delete file2.txt?
n
Would you like to delete file3.txt?
y
Removing file1.txt
Removing file3.txt

In each example, we want to avoid performing any irreversible action before the user has completed entering all requested input.


Let's revisit our first example:

main :: IO ()
main = do
    -- First, we request all inputs:
    putStrLn "What is your name?"
    name <- getLine

    putStrLn "What is your age?"
    age <- getLine

    -- Then, we perform all actions:
    putStrLn ("Your name is: " ++ name)
    putStrLn ("Your age is: " ++ age)

This example is really combining two separate wizards:

  • The first wizard requests and displays the user's name
  • The second wizard requests and displays the user's age

However, we had to interleave the logic for these two wizards because we needed to request all inputs before performing any action.

What if there were a way to define these two wizards separately and then combine them into a larger wizard? We can do so by taking advantage of the Monoid instance for IO, like this:

import Data.Monoid ((<>))

name :: IO (IO ())
name = do
    putStrLn "What is your name?"
    x <- getLine
    return (putStrLn ("Your name is: " ++ x))

age :: IO (IO ())
age = do
    putStrLn "What is your age?"
    x <- getLine
    return (putStrLn ("Your age is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (name <> age)

This program produces the exact same behavior as before, but now all the logic for dealing with the user's name is totally separate from the logic for dealing with the user's age.

The way this works is that we split each wizard into two parts:

  • the "request" (i.e. prompting the user for input)
  • the "response" (i.e. performing an action based on that input)

... and we do so at the type-level by giving each wizard the type IO (IO ()):

name :: IO (IO ())

age :: IO (IO ())

The outer IO action is the "request". When the request is done the outer IO action returns an inner IO action which is the "response". For example:

--      ↓ The request
name :: IO (IO ())
--          ↑ The response
name = do
    putStrLn "What is your name?"
    x <- getLine
    -- ↑ Everything above is part of the outer `IO` action (i.e. the "request")

    -- ↓ This return value is the inner `IO` action (i.e. the "response")
    return (putStrLn ("Your name is: " ++ x))

We combine wizards using the (<>) operator, which has the following behavior when specialized to IO actions:

ioLeft <> ioRight

= do resultLeft  <- ioLeft
     resultRight <- ioRight
     return (resultLeft <> resultRight)

In other words, if you combine two IO actions you just run each IO action and then combine their results. This in turn implies that if we nest two IO actions then we repeat this process twice:

requestLeft <> requestRight

= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (respondLeft <> respondRight)

= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (do
         unitLeft  <- respondLeft
         unitRight <- respondRight
         return (unitLeft <> unitRight) )

-- Both `unitLeft` and `unitRight` are `()` and `() <> () = ()`, so we can
-- simplify this further to:
= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (do
         respondLeft
         respondRight )

In other words, when we combine two wizards we combine their requests and then combine their responses.

This works for more than two wizards. For example:

request0 <> request1 <> request2 <> request3

= do respond0 <- request0
     respond1 <- request1
     respond2 <- request2
     respond3 <- request3
     return (do
         respond0
         respond1
         respond2
         respond3 )

To show this in action, let's revisit our original example once again:

import Data.Monoid ((<>))

name :: IO (IO ())
name = do
    putStrLn "What is your name?"
    x <- getLine
    return (putStrLn ("Your name is: " ++ x))

age :: IO (IO ())
age = do
    putStrLn "What is your age?"
    x <- getLine
    return (putStrLn ("Your age is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (name <> age)

... and this time note that name and age are awfully similar, so we can factor them out into a shared function:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (prompt "name" <> prompt "age")

We were not able to factor out this shared logic back when the logic for the two wizards was manually interleaved. Once we split them into separate logical wizards, we can begin to exploit shared structure to compress our program.

This program compression lets us easily add new wizards:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (prompt "name" <> prompt "age" <> prompt "favorite color")

... and take advantage of standard library functions that work on Monoids, like foldMap so that we can mass-produce wizards:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (foldMap prompt [ "name", "age", "favorite color", "sign" ])

More importantly, we can now easily see at a glance what our program does, and ease of reading is a greater virtue than ease of writing.

Final example

Now let's revisit the file removal example through the same lens:

import qualified System.Directory

main :: IO ()
main = do
    -- First, we request all inputs:
    files <- System.Directory.listDirectory "."

    let askFile file = do
            putStrLn ("Would you like to delete " ++ file ++ "?")
            response <- getLine
            case response of
                "y" -> return [file]
                _   -> return []

    listOfListOfFilesToRemove <- mapM askFile files
    let listOfFilesToRemove = concat listOfListOfFilesToRemove

    -- Then, we perform all actions:
    let removeFile file = do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file
    mapM_ removeFile listOfFilesToRemove

We can simplify this using the same pattern:

import qualified System.Directory

main :: IO ()
main = do
    files <- System.Directory.listDirectory "."
    runWizard (foldMap prompt files)

prompt :: FilePath -> IO (IO ())
prompt file = do
    putStrLn ("Would you like to delete " ++ file ++ "?")
    response <- getLine
    case response of
        "y" -> return (do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file )
        _   -> return (return ())

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

All we have to do is define a wizard for processing a single file and mass-produce the wizard using foldMap, and the Monoid instance for IO takes care of bundling all the requests up front and threading the selected files to be removed afterwards.


This pattern does not subsume all possible wizards that users might want to write. For example, if the wizards depend on one another then this pattern breaks down pretty quickly. However, hopefully this provides an example of how you can combine the Monoid instance for IO with other Monoid instances (even itself!) to generate emergent behavior.


by Gabriel Gonzalez at February 07, 2018 03:56 PM

February 06, 2018

Douglas M. Auclair (geophf)

January 2018 1Liner 1HaskellADay problems and solutions

  • January 8th, 2018: from Nicoλas‏ @BeRewt
    A small @1HaskellADay, old-school. Define foo:

    > foo 3 [1..5]
    [([1,2,3], 4), ([2,3,4], 5)]

    > foo 2 [1..4]
    [([1,2], 3), ([2,3], 4)]

    > foo 2 [1..20]
    [([1,2],3), ([2,3],4), ..., ([18,19],20)]

    > foo 20 [1..2]
    []
    • Demiurge With a Teletype @mrkgrnao
      foo n
        = tails
        # filter (length # (> n))
        # map (splitAt n # second head)

      (#) = flip (.)
    • Andreas Källberg @Anka213
      I haven't tested it, but this should work:
      foo n xs = [ (hd, x) | (hd, x:_) <- splitAt n <$> tails xs ]
    • Nicoλas @BeRewt foo n = zip <$> fmap (take n) . tails <*> drop n
  • January 5th, 2018: You have the following DAG-paths:

    a -> b -> c -> e
    a -> b -> d -> e
    q -> r -> s
    w -> x
    y -> z

    and many more.

    From a path, provide a bi-directional encoding* given maximum graph depth is, say, 7, max number of roots is, say, 10, and max number of nodes is, say, 1000.
    • *bi-directional encoding of a graph path:

      DAG path -> enc is unique for an unique DAG path
      enc -> DAG path yields the same DAG path that created the unique enc.

      *DAG: "Directed, acyclic graph."
  • January 5th, 2018: given s :: Ord k => a -> (k,[v])

    define f using s

    f :: Ord k => [a] -> Map k [v]

    with no duplicate k in [a]
    • Christian Bay @the_greenbourne f = foldr (\e acc -> uncurry M.insert (s e) acc) M.empty
      • me: you can curry away the acc variable easily
      • Christian Bay @the_greenbourne You're right :)
        f = foldr (uncurry M.insert . s) M.empty
    • Bazzargh @bazzargh fromList.(map s) ?
      • me: Yuppers

by geophf at February 06, 2018 09:43 AM

Ken T Takusagawa

[iblofees] Calendar Facts

Here is a machine readable version of xkcd #1930 "Calendar Facts", perhaps useful for followup projects like creating a random fact generator.  Further notes follow.

Sequence [Atom "Did you know that",Choice [Sequence [Atom "the",Choice [Sequence [Choice [Atom "fall",Atom "spring"],Atom "equinox"],Sequence [Choice [Atom "winter",Atom "summer"],Choice [Atom "solstice",Atom "Olympics"]],Sequence [Choice [Atom "earliest",Atom "latest"],Choice [Atom "sunrise",Atom "sunset"]]]],Sequence [Atom "Daylight",Choice [Atom "Saving",Atom "Savings"],Atom "Time"],Sequence [Atom "leap",Choice [Atom "day",Atom "year"]],Atom "Easter",Sequence [Atom "the",Choice [Atom "Harvest",Atom "super",Atom "blood"],Atom "moon"],Atom "Toyota Truck Month",Atom "Shark Week"],Choice [Sequence [Atom "happens",Choice [Atom "earlier",Atom "later",Atom "at the wrong time"],Atom "every year"],Sequence [Atom "drifts out of sync with the",Choice [Atom "sun",Atom "moon",Atom "zodiac",Sequence [Choice [Atom "Gregorian",Atom "Mayan",Atom "lunar",Atom "iPhone"],Atom "calendar"],Atom "atomic clock in Colorado"]],Sequence [Atom "might",Choice [Atom "not happen",Atom "happen twice"],Atom "this year"]],Atom "because of",Choice [Sequence [Atom "time zone legislation in",Choice [Atom "Indiana",Atom "Arizona",Atom "Russia"]],Atom "a decree by the Pope in the 1500s",Sequence [Choice [Atom "precession",Atom "libration",Atom "nutation",Atom "libation",Atom "eccentricity",Atom "obliquity"],Atom "of the",Choice [Atom "moon",Atom "sun",Atom "earth's axis",Atom "equator",Atom "prime meridian",Sequence [Choice [Atom "International Date",Atom "Mason-Dixon"],Atom "line"]]],Atom "magnetic field reversal",Sequence [Atom "an arbitrary decision by",Choice [Atom "Benjamin Franklin",Atom "Isaac Newton",Atom "FDR"]]],Atom "? ",Atom "Apparently",Choice [Atom "it causes a predictable increase in car accidents",Atom "that's why we have leap seconds",Atom "scientists are really worried",Sequence [Atom "it was even more extreme during the",Choice [Sequence [Choice [Atom "Bronze",Atom "Ice"],Atom "Age"],Atom "Cretaceous",Atom "1990s"]],Sequence [Atom "there's a proposal to fix it, but it",Choice [Atom "will never happen",Atom "actually makes things worse",Atom "is stalled in Congress",Atom "might be unconstitutional"]],Atom "it's getting worse and no one knows why"],Atom ". ",Atom "While it may seem like trivia, it",Choice [Atom "causes huge headaches for software developers",Atom "is taken advantage of by high-speed traders",Atom "triggered the 2003 Northeast Blackout",Atom "has to be corrected for by GPS satellites",Atom "is now recognized as a major cause of World War I"],Atom "."]

Including the mouseover text, the grammar encodes 780,000 facts.

The above grammar is the output of "show" by a Haskell program where we typed the grammar slightly more compactly, using/abusing the OverloadedStrings language extension and a Num instance.  OverloadedStrings is a nice language extension for when we have a large number of literals in the source.  Excerpts of the full source code below:

{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString(fromString));

data Grammar = Atom String | Choice [Grammar] | Sequence [Grammar] deriving (Show,Eq);

instance IsString Grammar where {
fromString = Atom;
};

instance Num Grammar where {
(+) x y = Choice [x,y];
(*) x y = Sequence [x,y];
abs = undefined;
signum = undefined;
fromInteger = undefined;
negate = undefined;
};

facts :: Grammar;
facts =
    "Did you know that"

by Ken at February 06, 2018 03:40 AM

February 04, 2018

Michael Snoyman

The Conduitpocalypse

At the end of last week, I made a number of breaking releases of libraries. The API impact of these changes was relatively minor, so most code should continue to work with little to no modification. I'm going to call out the major motivating changes for these releases below. If I leave out a package from explanations, assume the reason is just "upstream breaking changes caused breaking changes here." And of course check the relevant ChangeLogs for more details.

For completeness, the list of packages I released at the end of last week is:

  • conduit-1.3.0
  • conduit-extra-1.3.0
  • network-conduit-tls-1.3.0
  • resourcet-1.2.0
  • xml-conduit-1.8.0
  • html-conduit-1.3.0
  • xml-hamlet-0.5.0
  • persistent-2.8.0
  • persistent-mongoDB-2.8.0
  • persistent-mysql-2.8.0
  • persistent-postgresql-2.8.0
  • persistent-sqlite-2.8.0
  • persistent-template-
  • persistent-test-
  • conduit-combinators-1.3.0
  • yesod-1.6.0
  • yesod-auth-1.6.0
  • yesod-auth-oauth-1.6.0
  • yesod-bin-1.6.0
  • yesod-core-1.6.0
  • yesod-eventsource-1.6.0
  • yesod-form-1.6.0
  • yesod-newsfeed-
  • yesod-persistent-1.6.0
  • yesod-sitemap-1.6.0
  • yesod-static-1.6.0
  • yesod-test-1.6.0
  • yesod-websockets-0.3.0
  • classy-prelude-1.4.0
  • classy-prelude-conduit-1.4.0
  • classy-prelude-yesod-1.4.0
  • mutable-containers-0.3.4

Switching to MonadUnliftIO

The primary instigator for this set of releases was moving my libraries over from MonadBaseControl and MonadCatch/MonadMask (from the monad-control and exceptions packages, respectively) over to MonadUnliftIO. I gave a talk recently (slides) at LambdaWorld about this topic, and have blogged at length as well. Therefore, I'm not going to get into the arguments here of why I think MonadUnliftIO is a better solution to this class of problems.

Unless I missed something, this change dropped direct dependency on the monad-control, lifted-base, and lifted-async packages throughout all of the packages listed above. The dependency on the exceptions package remains, but only for using the MonadThrow typeclass, not the MonadCatch and MonadMask typeclasses. (This does leave open a question of whether we should still define valid instances of MonadCatch and MonadMask, see rio issue #38.)

User impact: You may need to switch some usages of the lifted-base package to unliftio or similar, and update some type signatures. It's possible that if you're using a monad transformer stack which is not an instance of MonadUnliftIO that you'll face compilation issues.
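
As a hedged sketch of what that migration typically looks like (the unliftio module and function below exist today; your code may of course use different functions):

-- Before, with lifted-base:
--     import Control.Exception.Lifted (bracket)
-- After, with unliftio:

import UnliftIO.Exception (bracket)

main :: IO ()
main = bracket
    (putStrLn "acquire")        -- runs first
    (\_ -> putStrLn "release")  -- guaranteed to run last
    (\_ -> putStrLn "use")      -- the action in between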

Safer runResourceT

In previous versions of the resourcet package, if you register a cleanup action which throws an exception itself, the exception would be swallowed. In this new release, any exceptions thrown during cleanup will be rethrown by runResourceT.
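
For example, in the following sketch (my own, assuming resourcet-1.2; ErrorCall is just a stand-in exception) the exception thrown by the registered cleanup now escapes runResourceT instead of being silently dropped:

import Control.Exception (ErrorCall (..), throwIO)
import Control.Monad.Trans.Resource (register, runResourceT)

main :: IO ()
main = runResourceT $ do
    -- This cleanup throws; with resourcet-1.2 the exception is
    -- rethrown by runResourceT rather than swallowed.
    _ <- register (throwIO (ErrorCall "cleanup failed"))
    return ()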

conduit cleanups

There were some big-ish changes to conduit:

  • Drop finalizers from the library, as discussed previously. This resulted in the removal of the yieldOr and addCleanup functions, and the replacement of the ResumableSource and ResumableConduit types with the SealedConduitT type.
  • Deprecated the old type synonyms and operators from the library. This has been planned for a long time.
  • Moved the Conduit and Data.Conduit.Combinators modules from conduit-combinators into conduit itself. This increases the dependency footprint of conduit itself, but makes it a fully loaded streaming data library. conduit-combinators is now an empty library. (A sketch of the resulting API style follows this list.)
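
For example, a pipeline written against the surviving API uses runConduit and the .| operator rather than the deprecated $$ and =$= (a sketch against conduit-1.3, not taken from the post):

import Conduit

main :: IO ()
main = runConduit
     $ yieldMany [(1 :: Int) .. 10]  -- produce the numbers 1 through 10
    .| mapC (* 2)                    -- transform each element
    .| mapM_C print                  -- consume by printing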

Yesod: no more transformers!

The changes mentioned in my last blog post have been carried out. The biggest impact of that is replacing HandlerT and WidgetT (as transformers over IO) with HandlerFor and WidgetFor, as concrete monads parameterized by the site data type. Thanks to backwards compat HandlerT and WidgetT type synonyms, and the Template Haskell-generated Handler and Widget synonyms being updated automatically, hopefully most users will feel almost no impact from this. (Authors of subsites, however, will likely have a more significant amount of work to do.)

That's it?

Yeah, this post turned out much smaller than I expected. There are likely breakages that I've forgotten about and which should be called out. If anyone notices particular breakages they needed to work around, please either include a note below or send a PR to this blog post (link above) adding information on the change.

February 04, 2018 10:15 AM

February 03, 2018

Roman Cheplyaka

Undefined behavior with StablePtr in Haskell

What will the following Haskell code snippet do?

  ptr1 <- newStablePtr 1
  ptr2 <- newStablePtr 2

  print =<< deRefStablePtr ptr1

Of course it will print 1… most of the time. But it’s also easy to make it print 2.

Here is the full program:

import Foreign.StablePtr

main = do
  ptr <- newStablePtr ()
  freeStablePtr ptr
  freeStablePtr ptr

  ptr1 <- newStablePtr 1
  ptr2 <- newStablePtr 2

  print =<< deRefStablePtr ptr1

If I compile it with ghc 8.0.2 on x86-64 Linux, it prints 2 and exits cleanly. (Under ghci or runghc, it also prints 2 but then gets a SIGBUS.) You can imagine how much fun it was to debug this issue in a complex library with a non-obvious double free.

The docs for freeStablePtr say:

Dissolve the association between the stable pointer and the Haskell value. Afterwards, if the stable pointer is passed to deRefStablePtr or freeStablePtr, the behaviour is undefined.

As far as undefined behaviors go, this one is fairly benign — at least it didn’t delete my code or install a backdoor.

Let’s see what’s going on here. The relevant definitions are in includes/stg/Types.h, includes/rts/Stable.h, rts/Stable.h, and rts/Stable.c. The excerpts below are simplified compared to the actual ghc source.

A stable pointer is just an integer index into an array, stable_ptr_table, although it is represented as a void*.

/*
 * Stable Pointers: A stable pointer is represented as an index into
 * the stable pointer table.
 *
 * StgStablePtr used to be a synonym for StgWord, but stable pointers
 * are guaranteed to be void* on the C-side, so we have to do some
 * occasional casting. Size is not a matter, because StgWord is always
 * the same size as a void*.
 */

typedef void* StgStablePtr;

typedef struct {
    StgPtr addr;
} spEntry;

spEntry *stable_ptr_table;

This is how the table works:

  • If an index i is allocated to a valid StablePtr, then stable_ptr_table[i].addr points to whatever heap object the stable pointer is supposed to point to.

    StgPtr deRefStablePtr(StgStablePtr sp)
    {
        return stable_ptr_table[(StgWord)sp].addr;
    }
  • If the index i is free (not allocated to any valid StablePtr), then the corresponding entry in the table acts as a node in a linked list that contains all free entries. The variable stable_ptr_free contains a pointer to the start of this linked list.

    static spEntry *stable_ptr_free;

    /* Free a StgStablePtr */
    void freeSpEntry(spEntry *sp)
    {
        sp->addr = (P_)stable_ptr_free;
        stable_ptr_free = sp;
    }

    /* Allocate a fresh StgStablePtr */
    StgStablePtr getStablePtr(StgPtr p)
    {
        StgWord sp;
        sp = stable_ptr_free - stable_ptr_table;
        stable_ptr_free  = (spEntry*)(stable_ptr_free->addr);
        stable_ptr_table[sp].addr = p;
        return (StgStablePtr)(sp);
    }

Suppose that the first two free entries in stable_ptr_table are 18 and 19, which is what I actually observe at the program startup. (The RTS creates a few stable pointers of its own for the purpose of running the program, which explains why these don’t start at 0.) Here’s the double-free code annotated with what happens on each step.

  ptr <- newStablePtr ()
  -- ptr == 18
  -- stable_ptr_free == &stable_ptr_table[19]
  -- stable_ptr_table[18].addr is a pointer to a Haskell value ()
  freeStablePtr ptr
  -- stable_ptr_free == &stable_ptr_table[18]
  -- stable_ptr_table[18].addr == &stable_ptr_table[19]
  freeStablePtr ptr
  -- stable_ptr_free == &stable_ptr_table[18]
  -- stable_ptr_table[18].addr == &stable_ptr_table[18]

  ptr1 <- newStablePtr 1
  -- ptr1 == 18
  -- stable_ptr_free == &stable_ptr_table[18]
  -- stable_ptr_table[18].addr is a pointer to a Haskell value 1
  ptr2 <- newStablePtr 2
  -- ptr2 == 18
  -- stable_ptr_free is a pointer to a Haskell value 1
  -- stable_ptr_table[18].addr is a pointer to a Haskell value 2

Because stable_ptr_free now points into the Haskell heap and outside of stable_ptr_table, further allocations of stable pointers will corrupt Haskell memory and eventually result in SIGBUS or SIGSEGV.
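
The post stops at the diagnosis, but one defensive pattern (my own sketch, not from the post, and not thread-safe as written) is to wrap StablePtr so that freeing is idempotent:

import Data.IORef
import Foreign.StablePtr

newtype SafePtr a = SafePtr (IORef (Maybe (StablePtr a)))

newSafePtr :: a -> IO (SafePtr a)
newSafePtr x = SafePtr <$> (newIORef . Just =<< newStablePtr x)

-- A second call finds Nothing and does nothing, avoiding the
-- undefined behavior of a double freeStablePtr.
freeSafePtr :: SafePtr a -> IO ()
freeSafePtr (SafePtr ref) = do
    mp <- readIORef ref
    writeIORef ref Nothing
    mapM_ freeStablePtr mp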

February 03, 2018 08:00 PM

February 02, 2018

Joachim Breitner

The magic “Just do it” type class

One of the great strengths of strongly typed functional programming is that it allows type driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that.
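
For reference, the implementation that the type all but dictates (and that feels like busywork to type out) is:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo f g r = do
    a <- f r       -- run the first computation against the environment
    b <- g a r     -- feed its result to the second one
    return (a, b)  -- remember the intermediate result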

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

What is this justDoIt?

*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famous in the Haskell world is certainly Lennart Augustsson’s Djinn tool, which creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library, which uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

by Joachim Breitner at February 02, 2018 07:01 PM

Ken T Takusagawa

[wisnmyni] Data as a number

Convert a number with possibly many leading zeroes in base M to base N, prefixing the base N output representation with leading zeroes in a way that unambiguously specifies the number of leading zeroes in the base M input.  I think this is possible when M > N.  Some potentially useful conversions:

(M=16, N=10); (M=256, N=10); (M=256, N=100); (M=10, N=2)

The deeper idea is, whenever a string of characters represents raw data and not a word in a natural language, it should be encoded so that it is clear that the string is not meant to be interpreted as a word in natural language.  Unannotated hexadecimal fails this rule: if you see the string "deadbeef", is it a word, with a meaning perhaps related to food, or is it a number?  (Annotated hexadecimal, e.g., "0xdeadbeef" is clearly not a word.)  Simpler example: what does "a" mean, indefinite article or ten in hexadecimal?  English, and other orthographies which use Hindu-Arabic numerals, already have a character set which unambiguously state that a string encodes data and not words: the numerals.  (One could argue that numbers -- strings of Hindu-Arabic numerals -- have more meaning than strictly data: they have ordinality, they obey group and field axioms, etc.  However, we frequently see numbers which aren't that, e.g., serial numbers, phone numbers, ZIP codes, ID and credit card numbers.)

The inspiration was: expressing hashes in hexadecimal is silly.  Radix conversion is quick and easy for a computer; it should be in decimal, with leading zeroes if necessary.  If making the hash compact is a goal, then base 26 or base 95, with some annotation signifying it is encoded data, is better than hexadecimal.

Some Haskell code demonstrating some of the conversions.
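
That code is linked rather than inlined; as a rough illustration of the underlying idea (my own sketch, not the linked program), here is plain radix conversion with the leading zeroes tracked explicitly:

import Data.List (foldl')

-- Interpret a digit string in base m as an Integer.
fromBase :: Integer -> [Integer] -> Integer
fromBase m = foldl' (\acc d -> acc * m + d) 0

-- Render an Integer in base n (most significant digit first; zero
-- yields the empty list in this simplified version).
toBase :: Integer -> Integer -> [Integer]
toBase _ 0 = []
toBase n x = toBase n (x `div` n) ++ [x `mod` n]

-- "007f" in base 16 carries two leading zeroes plus the value 127;
-- the pair below preserves both pieces of information.
example :: (Int, [Integer])
example = (length (takeWhile (== 0) digits), toBase 10 (fromBase 16 digits))
  where digits = [0, 0, 7, 15]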

by Ken at February 02, 2018 06:58 AM

February 01, 2018

Douglas M. Auclair (geophf)

January 2018 1HaskellADay Problems and Solutions

by geophf at February 01, 2018 03:21 AM

Sandy Maguire

Devlog: Action Menus, Timers and Hit Detection

February 1, 2018 (devlog, neptune)

The other day, I found myself working on the interaction subsystem of my game engine. I want the game to play like Monkey Island 3, which means you can click on the ground to walk there. You can also click and hold on an interactive piece of scenery in order to have a context-sensitive menu pop-up, from which you can choose how to interact with the object in question. If you’re not familiar with the genre, watching a few minutes of the video linked above should give you some idea of what I’m trying to build.

An adventure game in which you’re unable to interact with anything isn’t much of a game, and that’s where we left the engine. So it seemed like a thing to focus on next.

I knew that the click/hold interaction I wanted formed some sort of DFA, so I unwisely headed down that garden path for a bit. After implementing it, I settled on a state machine with the denotation DFA s e a = s -> e -> Either s a, where s is the state of the machine, e is the type of an edge transition, and a is the eventual output of the machine. Upon reaching the final result, however, it became clear that I had fallen into an abstraction hole. I spent a bunch of time figuring out the implementation of this thing, and then afterwards realized it didn’t actually solve my problem. Whoops. Amateur Haskell mistake :)

The problem is that transitioning into some state might need to make a monadic action in order to generate the next edge. For example, when you press down on the mouse button, we need to start a timer which will open the action menu when it expires. This could be alleviated by changing Either to These and letting a ~ (Monad m => m b), but that struck me as a pretty ugly hack, and getting the implementation of the denotation to work again was yucky.

So I decided that instead maybe I should write a dumb version of what I wanted, and find out how to abstract it later if I should need similar machinery again in the future. I burned my DFA implementation in a fire.

This posed a problem, though, because if I wanted to write this for real I was going to need things to actually interact with, and I didn’t yet have those. I decided to put the interaction sprint on hold, in order to focus more on having things with which to interact.

One abstraction I think in terms of when working with adventure games is that of the hotspot. A hotspot is a mask on the background image which indicates a static piece of interesting geometry. For example, a window that never moves would be baked into the background image of the room, and then a hotspot would be masked on top of it to allow the character to interact with it.

For example, if our room looks like this (thanks to MI2 for the temporary art):

room background

Then our mask image would look like this:

room mask

We can add some logic to be able to read the mask:

mkHotspot
    :: Image PixelRGBA8
    -> (Word8 -> Bool)
    -> Hotspot
    -> Pos
    -> Maybe Hotspot
mkHotspot img f h = bool Nothing (Just h)
                  . f
                  . getHotspotByte
                  . uncurry (pixelAt img)
                  . (\(V2 x y) -> (x, y))
                  . clampToWorld
                  . fmap round
  where
    clampToWorld = clamp (V2 0 0) $ imageSize img
    getHotspotByte (PixelRGBA8 _ g _ _) = g

and now bake the first three parameters of this function when we construct our level definition.

In order to test these things, I added a field _hsName :: Hotspot -> String in order to be able to test if my logic worked. The next step was to bind the click event to be able to call the Pos -> Maybe Hotspot that I curried out of mkHotspot and stuck into my Room datastructure (_hotspots :: Room -> Pos -> Maybe Hotspot).

I clicked around a bunch, and found that print . fmap _hsName $ _hotspots currentRoom mousePos lined up with the door when I clicked on it. It seemed to be working, so I considered my first yak shave successful: I now had something in the world that I could interact with.

The next step was to code up a little bit of the DFA I was originally working on. I decided that I should make the avatar walk to the place you clicked if it wasn’t a hotspot.

case event of
  MouseButton Down ->
    case _hotspots currentRoom mousePos of
      Just hs ->
        print $ _hsName hs

      Nothing ->
        when (isWalkable (_navmesh currentRoom) mousePos) $
          emap $ do
            with isAvatar
            pure defEntity'
              { pathing = Set $ NavTo mousePos
              }

So: when the mouse is pressed, see if it was over top of a hotspot. If so, print out the name of it. Otherwise, check the navmesh of the room, and see if that’s a valid place to walk. If so, update any entity who has the isAvatar component and set its pathing component to be the location we want.

The engine at this point already has navigation primitives, which is why this works. We’ll discuss how the navmesh is generated and used in another devlog post.

I ran this code and played around with it for a while. Everything looked good – after I remembered to set isAvatar on my player entity :)

The next step was to implement timers that would have a callback, and could be started and stopped. I’d need support for these in order to wait a little bit before opening up the action menu. Thankfully, timers are super easy: just have an amount of time you decrement every frame until it hits zero, and then do the necessary action. I came up with this model for timers:

data Timer = Timer
  { _tTime     :: Time
  , _tCallback :: Game ()
  }

data TimerType
  = TimerCoin
  deriving (Eq, Ord)

data GlobalState = GlobalState
  { ... -- other stuff
  , _timers :: Map TimerType Timer
  }

A Timer is just an amount of remaining time and something to do afterwards. It’s stored in the GlobalState with a TimerType key. I originally thought about using a bigger type (such as Int) as my timer key, but realized that would make canceling specific timers harder as it would imply they’re given a non-deterministic key when started. The interface for starting and canceling timers turned out to be trivial:

startTimer :: TimerType -> Time -> Game () -> Game ()
startTimer tt t cb =
  setGlobals $ timers . at tt ?~ Timer t cb

cancelTimer :: TimerType -> Game ()
cancelTimer tt =
  setGlobals $ timers . at tt .~ Nothing

The only thing left is to update timers and run their callbacks when it’s time. I fucked around with this implementation too hard, trying to find a completely lensy way of doing it, but eventually settled on this ugly fromList . toList thing:

updateTimers :: Time -> Game ()
updateTimers dt = do
  ts  <- getGlobals $ view timers
  ts' <- forOf traverse ts $ \t ->
           if _tTime t - dt <= 0
             then _tCallback t $> Nothing
             else pure . Just
                       $ t & tTime -~ dt

  setGlobals $
    timers .~ M.fromList (catMaybes . fmap sequence $ M.toList ts')

ts' is a traversal over the Map of timers, that decrements each of their times, optionally runs their callbacks, then returns a Maybe Timer for each one. The last line is where the interesting bit is – sequence over a (TimerType, Maybe Timer) is a Maybe (TimerType, Timer), which we can then insert back into our Map as we construct it – essentially filtering out any timers which have expired.
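
In isolation, the sequence trick on pairs looks like this (a quick illustration, not from the engine):

pairJust :: Maybe (String, Int)
pairJust = sequence ("coin", Just 3)      -- Just ("coin", 3)

pairNothing :: Maybe (String, Int)
pairNothing = sequence ("coin", Nothing)  -- Nothing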

Finally we can get back to our DFA. Instead of printing out the name of the hotspot you clicked on, we can now start a timer that will update our game state. I added a field to GlobalState:

data GlobalState = GlobalState
  { ... -- other stuff
  , _gInputDFA :: InputDFA
  }

data InputDFA
  = IStart
  | IBeforeCoin
  | ICoinOpen Pos Hotspot
  deriving (Eq, Ord)

The idea is that we start in state IStart, transition into IBeforeCoin when we start the timer, and into ICoinOpen when the timer expires. Additionally, if the user releases the mouse button, we want to cancel the timer. All of this becomes:

case (_gInputDFA globalState, event) of
  (IStart, MouseButton Down) ->
    case _hotspots currentRoom mousePos of
      Just hs -> do
        startTimer TimerCoin 0.5 $ do
          setGlobals $ gInputDFA .~ ICoinOpen mousePos hs
        setGlobals $ gInputDFA .~ IBeforeCoin

      Nothing ->
        -- as before

  (IBeforeCoin, MouseButton Up) -> do
    cancelTimer TimerCoin
    setGlobals $ gInputDFA .~ IStart

  (ICoinOpen p hs, MouseButton Up) -> do
    let verb = getBBSurface (coinSurface p) mousePos
    for_ verb $ doInteraction hs
    setGlobals $ gInputDFA .~ IStart

If you care, try to trace through these cases and convince yourself that this logic is correct. The reason we have a position stored inside the ICoinOpen is so that we know where the mouse was when the user started holding their mouse down. This corresponds to where we should draw the action menu.

This is done in the drawing routine by checking the current state of _gInputDFA – if it’s ICoinOpen it means the menu is up and we need to draw it.

The last remaining piece is mapping where you release your mouse button on the menu to the interaction we should perform. Our action menu looks like this:

the action menu

From left to right, these squares represent talking/eating, examining, and manipulating. We need some way of mapping a location on this image to a desired outcome.

Doing rectangle collision is easy enough – we define a bounding box and a test to see if a point is inside of it (as well as some auxiliary functions for constructing and moving BBs, elided here):

data BB = BB
  { leftX   :: Float
  , rightX  :: Float
  , topY    :: Float
  , bottomY :: Float
  } deriving (Eq, Ord, Show)

inBB :: BB -> Pos -> Bool
inBB BB{..} (V2 x y) = and
  [ x >= leftX
  , x <  rightX
  , y >= topY
  , y <  bottomY
  ]

rectBB :: Float -> Float -> BB
moveBB :: Pos -> BB -> BB

The final step is to somehow map these bounding boxes to things we want to return. This seems like it’ll be a recurring theme, so we build some machinery for it:

data BBSurface a = BBSurface [(BB, a)]
  deriving (Eq, Ord, Show)

getBBSurface :: BBSurface a -> Pos -> Maybe a
getBBSurface (BBSurface bs) p =
  getFirst . flip foldMap bs $ \(b, a) ->
    if inBB b p
       then First $ Just a
       else First $ Nothing

The abstraction is my amazingly-named BBSurface, which is a mapping of BBs to values of some type a. We can find a Maybe a on the BBSurface by just checking if the point is in any of the bounding boxes. If it is, we return the first value we find.

All that’s left is to construct one of these BBSurfaces for the coin, and then to move it to the position indicated inside the ICoinOpen. Easy as pie. Pulling everything together, our interactive menu works as expected. Great success!
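
For concreteness, a hypothetical coinSurface might look like the following, with three 32×32 boxes centered around the click position (the Verb type, sizes and offsets here are my guesses, not the engine’s actual values):

data Verb = TalkTo | Examine | Manipulate
  deriving (Eq, Ord, Show)

coinSurface :: Pos -> BBSurface Verb
coinSurface p = BBSurface
  [ (moveBB (p + V2 (-48) 0) box, TalkTo)
  , (moveBB p                box, Examine)
  , (moveBB (p + V2 48 0)    box, Manipulate)
  ]
  where box = rectBB 32 32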

Next time we’ll talk about navigation. Thanks for reading!


February 01, 2018 12:00 AM

January 28, 2018

Gabriel Gonzalez

Dhall Survey Results (2017-2018)


I advertised a survey in my prior post to collect feedback on the Dhall configuration language. You can find the raw results here:

18 people completed the survey at the time of this writing and this post discusses their feedback.

This post assumes that you are familiar with the Dhall configuration language. If you are not familiar, then you might want to check out the project website to learn more:

"Which option describes how often you use Dhall"

  • "Never used it" - 7 (38.9%)
  • "Briefly tried it out" - 7 (38.9%)
  • "Use it for my personal projects" - 1 (5.6%)
  • "Use it at work" - 3 (16.7%)

The survey size was small and the selection process is biased towards people who follow me on Twitter or read my blog and are interested enough in Dhall to answer the survey. So I don't read too much into the percentages but I am interested in the absolute count of people who benefit from using Dhall (i.e. the last two categories).

That said, I was pretty happy that four people who took the survey benefit from the use of Dhall. These users provided great insight into how Dhall currently provides value in the wild.

I was also surprised by how many people took the survey that had never used Dhall. These people still gave valuable feedback on their first impressions of the language and use cases that still need to be addressed.

"Would anything encourage you to use Dhall more often?"

This was a freeform question designed to guide future improvements and to assess if there were technical issues holding people back from adopting the language.

Since there were only 18 responses, I can directly respond to most of them:

Mainly, a formal spec (also an ATS parsing implementation)

Backends for more languages would be great.

More compilation outputs, e.g. Java

The main thing is more time on my part (adoption as JSON-preprocessor and via Haskell high on my TODO), but now that you ask, a Python binding

Stable spec, ...

Julia bindings

Scala integration, ...

I totally agree with this feedback! Standardizing the language and providing more backends are the highest priorities for this year.

No language was mentioned more than once, but I will most likely target Java and Scala first for new backends.

Some way to find functions would be immensely helpful. The way to currently find functions is by word of mouth.

This is a great idea, so I created an issue on the issue tracker to record this suggestion:

I probably won't have time myself to implement this (at least over the next year), but if somebody is looking for a way to contribute to the Dhall ecosystem this would be an excellent way to do so.

I would use Dhall more often if it had better type synonym support.

...; ability to describe the whole config in one file

Fortunately, I recently standardized and implemented type synonyms in the latest release of the language. For example, this is legal now:

    let Age = Natural

in let Name = Text

in let Person = { age : Age, name : Name }

in let John : Person = { age = +23, name = "John" }

in let Mary : Person = { age = +30, name = "Mary" }

in [ John, Mary ]

I assume the feedback on "ability to describe the whole config in one file" is referring to a common work-around of importing repeated types from another file. The new type synonym support means you should now be able to describe the whole config in one file without any issues.

Verbosity for writing values in sum types was a no-go for our use case

...; more concise sum types.

The most recent release added support for the constructors keyword to simplify sum types. Here is an example of this new keyword in action:

    let Project =
          constructors
          < GitHub  : { repository : Text, revision : Text }
          | Hackage : { package : Text, version : Text }
          | Local   : { relativePath : Text }
          >

in  [ Project.GitHub
      { repository = ""
      , revision   = "ae5edf227b515b34c1cb6c89d9c58ea0eece12d5"
      }
    , Project.Local { relativePath = "~/proj/optparse-applicative" }
    , Project.Local { relativePath = "~/proj/discrimination" }
    , Project.Hackage { package = "lens", version = "4.15.4" }
    , Project.GitHub
      { repository = ""
      , revision   = "ccbfabedea1cf5b38ff19f37549feaf01225e537"
      }
    , Project.Local { relativePath = "~/proj/servant-swagger" }
    , Project.Hackage { package = "aeson", version = "" }
    ]
Missing a linter usable in editors for in-buffer error highlighting.

..., editor support, ...

Dhall does provide a Haskell API including support for structured error messages. These store source spans which can be used for highlighting errors, so this should be straightforward to implement. These source spans power Dhall's error messages, such as:

$ dhall <<< 'let x = 1 in Natural/even x'

Use "dhall --explain" for detailed errors

Error: Wrong type of function argument

Natural/even x


... the source span is how Dhall knows to highlight the Natural/even x subexpression at the bottom of the error message and that information can be programmatically retrieved from the Haskell API.

Sometimes have to work around record or Turing completeness limitations to generate desired json or yaml: often need record keys that are referenced from elsewhere as strings.

I will probably never allow Dhall to be Turing complete. Similarly, I will probably never permit treating text as anything other than an opaque value.

Usually the solution here is to upstream the fix (i.e. don't rely on weakly typed inputs). For example, instead of retrieving the field's name as text, retrieve a Dhall function to access the desired field, since functions are first-class values in Dhall.

If you have questions about how to integrate Dhall into your workflow, the current best place to ask is the issue tracker for the language. I'm not sure if this is the best place to host such questions and I'm open to suggestions for other forums for support.

Would like to be able to compute the sha1 or similar of a file or expression.

The Dhall package does provide a dhall-hash utility exactly for this purpose, which computes the semantic hash of any file or expression. For example:

$ dhall-hash <<< 'λ(x : Integer) → [1, x, 3]'

Better syntax for optional values. It is abysymmal now. But perhaps I'm using dhall wrong. My colleague suggests merging into default values.

Dropping the lets

simpler syntax; ...

There were several complaints about the syntax. The best place to discuss and/or propose syntax improvements is the issue tracker for the language standard.

For example:

Better error messages; ...

The issue tracker for the Haskell implementation is the appropriate place to request improvements to error messages. Here are some recent examples of reported issues in error messages:

I worry that contributors/friends who'd like to work on a project with me would be intimidated because they don't know the lambda calculus

Dhall will always support anonymous functions, so I doubt the language will be able to serve those who shy away from lambda calculus. This is probably where something like JSON would be a more appropriate choice.

..., type inference

..., full type inference, row polymorphism for records

This is probably referring to Haskell/PureScript/Elm-style inference where the types of anonymous functions can be inferred (i.e. using unification). I won't rule out supporting type inference in the future but I doubt it will happen this year. However, I welcome any formal proposal to add type inference to the language now that the type-checking semantics have been standardized.

It's cool, but I'm yet to be convinced it will make my life easier

No problem! Dhall does not have a mission to "take over the world" or displace other configuration file formats.

Dhall's only mission is to serve people who want programmable configuration files without sacrificing safety or ease of maintenance.

"What do you use Dhall for?"

"Why do you use Dhall?"

This was the part of the survey that I look forward to the most: seeing how people benefit from Dhall. These were two freeform questions that I'm combining in one section:

WHAT: HTML templating, providing better manifest files (elm-package.json)

WHY: Dhall as a template language allows us to express all of the inputs to a template in one place. We don't have to hunt-and-peck to find out what a template is parameterized by. And if we want to understand what some variable is in the template, we can just look at the top of the file where its type is declared. We want exact dependencies for our elm code. There is no specific syntax for exact versions in elm-package.json. There is a hack around that limitation which requires enforcement by convention (or CI scripts or some such). We use Dhall to generate exact versions by construction. The JSON that's generated uses the hack around the limitation, but we don't have to think about that accidental complexity.

WHAT: Ad hoc command-line tools

WHAT: Generating docker swarm and other configs from common templates with specific variables inserted safely.

WHY: JSON and yaml suck, want to know about errors at compile time.

WHAT: Kubernetes

WHY: Schema validation. Kubernetes is known for blowing up in your face at runtime by doing evaluation and parsing at the same time

WHAT: Config

WHY: Strong typing & schema

WHY: Rich functinality and concise syntax

WHAT: JSON preprocessor

WHY: JSON is universal but a bit dumb

WHAT: configuration - use haskell integration and spit out json/yaml on some projects (to kubernetes configs for instance)

WHY: I love pure fp and dealing with configs wastes a lot of my time and is often error prone

Interestingly, ops-related applications like Docker Swarm and Kubernetes were mentioned in three separate responses. I've also received feedback (outside the survey) from people interested in using Dhall to generate Terraform and Cloudformation templates, too.

This makes sense because configuration files for operational management tend to be large and repetitive, which benefits from Dhall's support for functions and let expressions to reduce repetition.

Also, reliability matters for operations engineers, so being able to validate the correctness of a configuration file ahead of time using a strong type system can help reduce outages. As Dan Luu notes:

Configuration bugs, not code bugs, are the most common cause I’ve seen of really bad outages. When I looked at publicly available postmortems, searching for “global outage postmortem” returned about 50% outages caused by configuration changes. Publicly available postmortems aren’t a representative sample of all outages, but a random sampling of postmortem databases also reveals that config changes are responsible for a disproportionate fraction of extremely bad outages. As with error handling, I’m often told that it’s obvious that config changes are scary, but it’s not so obvious that most companies test and stage config changes like they do code changes.

Also, type-checking a Dhall configuration file is a faster and more convenient way to validate correctness ahead of time than an integration test.

Functions and types were the same two justifications for my company (Awake Security) using Dhall internally. We had large and repetitive configurations for managing a cluster of appliances that we wanted to simplify and validate ahead of time to reduce failures in the field.

Another thing that stood out was how many users rely on Dhall's JSON integration. I already suspected this but this survey solidified that.

I also learned that one user was using Dhall not just to reduce repetition, but to enforce internal policy (i.e. a version pinning convention for elm-package.json files). This was a use case I hadn't thought of marketing before.

Other feedback

We tried it as an option at work for defining some APIs but describing values in sum types was way too verbose (though I think I understand why it might be needed). We really like the promise of a strongly typed config language but it was pretty much unusable for us without building other tooling to potentially improve ergonomics.

As mentioned above, this is fixed in the latest release with the addition of the constructors keyword.

Last time I checked, I found syntax a bit awkward. Hey, Wadler's law :) Anyway, great job! I'm not using Dhall, but I try to follow its development, because it's different from everything else I've seen.

Syntax concerns strike again! This makes me suspect that this will be a recurring complaint (similar to Haskell and Rust).

I also appreciate that people are following the language and hopefully getting ideas for their own languages and tools. The one thing I'd really like more tools to steal is Dhall's import system.

You rock. Thanks for all your work for the community.

Dhall is awesome!


I love the project and am grateful for all the work that's been done to make it accessible and well documented. Keep up the good work!

Thank you all for the support, and don't forget to thank other people who have contributed to the project by reporting issues and implementing new features.


by Gabriel Gonzalez at January 28, 2018 05:05 PM