# Compile and link a Haskell package against a local C library

Suppose you want to build a Haskell package with a locally built version of a C library, for testing or debugging purposes. Doing this is easy once you know the right option names, but finding that information took me some time, so I’m recording it here for future reference.

Let’s say the headers of your local library are in /home/user/src/mylib/include and the library files (*.so or *.a) are in /home/user/src/mylib/lib. Then you can put the following into your stack.yaml (tested with stack v2.2.0; instructions for cabal-install should be similar):

    extra-include-dirs:
    - /home/user/src/mylib/include
    extra-lib-dirs:
    - /home/user/src/mylib/lib
    ghc-options:
      "$locals": -optl=-Wl,-rpath,/home/user/src/mylib/lib

Here "$locals" means “apply the options to all local packages”.
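To check that the wiring works end to end, a minimal FFI binding is handy. The sketch below binds against libm (which GHC links by default) purely as a stand-in; a function from your own mylib, together with the extra-include-dirs/extra-lib-dirs settings above, would work the same way:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Stand-in binding: cos from libm. For a local library you would bind
-- to one of its own functions instead, after setting the dirs above.
foreign import ccall unsafe "math.h cos" c_cos :: Double -> Double

main :: IO ()
main = print (c_cos 0)  -- 1.0
```

You can confirm the rpath took effect by running `ldd` on the built executable and checking which library file gets resolved.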

# Purely Functional GTK+, Part 2: TodoMVC

In the last episode we built a "Hello, World" application using gi-gtk-declarative. It's now time to convert it into a to-do list application, in the style of TodoMVC.

To convert the “Hello, World!” application to a to-do list application, we begin by adjusting our data types. The Todo data type represents a single item, with a Text field for its name. We also need to import the Text type from Data.Text.

    data Todo = Todo
      { name :: Text
      }

Our state will no longer be (), but a data type holding a Vector of Todo items. This means we also need to import Vector from Data.Vector.

    data State = State
      { todos :: Vector Todo
      }

As the run function returns the last state value of the state reducer loop, we need to discard that return value in main. We wrap the run action in void, imported from Control.Monad.
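As a reminder of what void does (this is plain Control.Monad, nothing GTK-specific): it maps every result to (), which is exactly what we need when run returns a final State we don’t care about.

```haskell
import Control.Monad (void)

-- void works for any Functor: the result is replaced by ()
discardedMaybe :: Maybe ()
discardedMaybe = void (Just (3 :: Int))

main :: IO ()
main = do
  print discardedMaybe        -- Just ()
  void (pure "final state")   -- an IO String turned into IO ()
```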

Let’s rewrite our view function. We change the title to “TodoGTK+” and replace the label with a todoList, which we’ll define in a where binding. We use container to declare a Gtk.Box, with vertical orientation, containing all the to-do items. Using fmap and a typed hole, we see that we need a function Todo -> BoxChild Event.

    view' :: State -> AppView Gtk.Window Event
    view' s = bin
      Gtk.Window
      [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
      todoList
      where
        todoList = container Gtk.Box
          [#orientation := Gtk.OrientationVertical]
          (fmap _ (todos s))

The todoItem will render a Todo value as a Gtk.Label displaying the name.

    view' :: State -> AppView Gtk.Window Event
    view' s = bin
      Gtk.Window
      [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
      todoList
      where
        todoList = container Gtk.Box
          [#orientation := Gtk.OrientationVertical]
          (fmap todoItem (todos s))
        todoItem todo = widget Gtk.Label [#label := name todo]

Now, GHC tells us there’s a “non-type variable argument in the constraint”. The type of todoList requires us to add the FlexibleContexts language extension.

    {-# LANGUAGE FlexibleContexts #-}
    module Main where

The remaining type error is in the definition of main, where the initial state cannot be a () value. We construct a State value with an empty vector.

    main :: IO ()
    main = void $ run App
      { view         = view'
      , update       = update'
      , inputs       = []
      , initialState = State {todos = mempty}
      }

## Adding New To-Do Items

While our application type-checks and runs, there are no to-do items to display, and there’s no way of adding new ones. We need to implement a form, where the user inserts text and hits the Enter key to add a new to-do item. To represent these events, we’ll add two new constructors to our Event type.

    data Event
      = TodoTextChanged Text
      | TodoSubmitted
      | Closed

TodoTextChanged will be emitted each time the text in the form changes, carrying the current text value. The TodoSubmitted event will be emitted when the user hits Enter. When the to-do item is submitted, we need to know the current text to use, so we add a currentText field to the state type.

    data State = State
      { todos       :: Vector Todo
      , currentText :: Text
      }

We modify the initialState value to include an empty Text value.

    main :: IO ()
    main = void $ run App
      { view         = view'
      , update       = update'
      , inputs       = []
      , initialState = State {todos = mempty, currentText = mempty}
      }

Now, let’s add the form. We wrap our todoList in a vertical box, containing the todoList and a newTodoForm widget.

    view' :: State -> AppView Gtk.Window Event
    view' s = bin
      Gtk.Window
      [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
      (container Gtk.Box
        [#orientation := Gtk.OrientationVertical]
        [todoList, newTodoForm]
      )
      where
        ...

The form consists of a Gtk.Entry widget, with the currentText of our state as its text value. The placeholder text will be shown when the entry isn’t focused. We use onM to attach an effectful event handler to the changed signal.

    view' :: State -> AppView Gtk.Window Event
    view' s = bin
      Gtk.Window
      [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
      (container Gtk.Box
        [#orientation := Gtk.OrientationVertical]
        [todoList, newTodoForm]
      )
      where
        ...
        newTodoForm = widget
          Gtk.Entry
          [ #text := currentText s
          , #placeholderText := "What needs to be done?"
          , onM #changed _
          ]

The typed hole tells us we need a function Gtk.Entry -> IO Event. We use onM so that the event handler is an IO action returning the event, rather than a pure function: we need IO to query the underlying GTK+ widget for its current text value. By using entryGetText, and mapping our event constructor over that IO action, we get a function of the correct type.

    ...
    newTodoForm = widget
      Gtk.Entry
      [ #text := currentText s
      , #placeholderText := "What needs to be done?"
      , onM #changed (fmap TodoTextChanged . Gtk.entryGetText)
      ]
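The shape of this handler is easy to check outside of GTK+. In the sketch below, getText is a hypothetical stand-in for Gtk.entryGetText (which really queries the widget); fmap maps the event constructor over the IO action, just as in the attribute above:

```haskell
data Event = TodoTextChanged String deriving (Eq, Show)

-- hypothetical stand-in for Gtk.entryGetText
getText :: IO String
getText = pure "Buy milk"

-- same shape as: onM #changed (fmap TodoTextChanged . Gtk.entryGetText)
onChangedHandler :: IO Event
onChangedHandler = fmap TodoTextChanged getText

main :: IO ()
main = onChangedHandler >>= print
```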

It is often necessary to use onM and effectful GTK+ operations in event handlers, as the callback type signatures rarely have enough information in their arguments. But for the next event, TodoSubmitted, we don’t need any more information, and we can use on to declare a pure event handler for the activated signal.

    ...
    newTodoForm = widget
      Gtk.Entry
      [ #text := currentText s
      , #placeholderText := "What needs to be done?"
      , onM #changed (fmap TodoTextChanged . Gtk.entryGetText)
      , on #activate TodoSubmitted
      ]

Moving to the next warning, we see that the update' function is no longer total. We are missing cases for our new events. Let’s give the arguments names and pattern match on the event. The case for Closed will be the same as before.

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      Closed -> Exit

When the to-do text value changes, we’ll update the currentText state using a Transition. The first argument is the new state, and the second argument is an action of type IO (Maybe Event). We don’t want to emit any new event, so we use (pure Nothing).

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      TodoTextChanged t -> Transition s { currentText = t } (pure Nothing)
      Closed            -> Exit
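The Transition type belongs to the gi-gtk-declarative-app-simple library, but its mechanics can be sketched in isolation. Below is a minimal stand-in (State and Event are reduced to only the pieces this step needs), letting us check that the TodoTextChanged case stores the new text:

```haskell
data Event = TodoTextChanged String | Closed

newtype State = State { currentText :: String } deriving (Eq, Show)

-- minimal stand-in for the library's Transition type
data Transition s e = Transition s (IO (Maybe e)) | Exit

update' :: State -> Event -> Transition State Event
update' s e = case e of
  TodoTextChanged t -> Transition s { currentText = t } (pure Nothing)
  Closed            -> Exit

-- project out the next state, if any, so the step is easy to inspect
nextState :: Transition s e -> Maybe s
nextState (Transition s _) = Just s
nextState Exit             = Nothing

main :: IO ()
main = print (nextState (update' (State "") (TodoTextChanged "buy milk")))
```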

For the TodoSubmitted event, we define a newTodo value with the currentText as its name, and transition to a new state with the newTodo item appended to the todos vector. We also reset the currentText to be empty.

To use Vector.snoc, we need to add a qualified import.

    import           Control.Monad                 (void)
    import           Data.Text                     (Text)
    import           Data.Vector                   (Vector)
    import qualified Data.Vector                   as Vector
    import qualified GI.Gtk                        as Gtk
    import           GI.Gtk.Declarative
    import           GI.Gtk.Declarative.App.Simple

Running the application, we can start adding to-do items.

## Improving the Layout

Our application doesn’t look very good yet, so let’s improve the layout a bit. We’ll begin by left-aligning the to-do items.

    todoItem todo =
      widget
        Gtk.Label
        [#label := name todo, #halign := Gtk.AlignStart]

To push the form down to the bottom of the window, we’ll wrap the todoList in a BoxChild, and override the defaultBoxChildProperties to have the child widget expand and fill all the available space of the box.

    todoList =
      BoxChild defaultBoxChildProperties { expand = True, fill = True }
        $ container Gtk.Box
            [#orientation := Gtk.OrientationVertical]
            (fmap todoItem (todos s))

We re-run the application, and see it has a nicer layout.

## Completing To-Do Items

There’s one very important feature missing: being able to mark a to-do item as completed. We add a Bool field called completed to the Todo data type.

    data Todo = Todo
      { name      :: Text
      , completed :: Bool
      }

When creating new items, we set it to False.

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      ...
      TodoSubmitted ->
        let newTodo = Todo {name = currentText s, completed = False}
        in  Transition
              s { todos = todos s `Vector.snoc` newTodo, currentText = mempty }
              (pure Nothing)
      ...

Instead of simply rendering the name, we’ll use strike-through markup if the item is completed. We define completedMarkup, and using guards we’ll either render the new markup or render the plain name. To make it strike-through, we wrap the text value in `<s>` tags.

    widget
      Gtk.Label
      [ #label := completedMarkup todo
      , #halign := Gtk.AlignStart
      ]
      where
        completedMarkup todo
          | completed todo = "<s>" <> name todo <> "</s>"
          | otherwise      = name todo

For this to work, we need to enable markup for the label by setting #useMarkup to True.

    widget
      Gtk.Label
      [ #label := completedMarkup todo
      , #useMarkup := True
      , #halign := Gtk.AlignStart
      ]
      where
        completedMarkup todo
          | completed todo = "<s>" <> name todo <> "</s>"
          | otherwise      = name todo

In order for the user to be able to toggle the completed status, we wrap the label in a Gtk.CheckButton bin. The #active property will be set to the current completed status of the Todo value. When the check button is toggled, we want to emit a new event called TodoToggled.

    todoItem todo =
      bin Gtk.CheckButton
          [#active := completed todo, on #toggled (TodoToggled i)]
        $ widget
            Gtk.Label
            [ #label := completedMarkup todo
            , #useMarkup := True
            , #halign := Gtk.AlignStart
            ]

Let’s add the new constructor to the Event data type. It will carry the index of the to-do item.

    data Event
      = TodoTextChanged Text
      | TodoSubmitted
      | TodoToggled Int
      | Closed

To get the corresponding index of each Todo value, we’ll iterate using Vector.imap instead of using fmap.

    todoList =
      BoxChild defaultBoxChildProperties { expand = True, fill = True }
        $ container Gtk.Box
            [#orientation := Gtk.OrientationVertical]
            (Vector.imap todoItem (todos s))
    todoItem i todo = ...

The pattern match on events in the update' function is now missing a case for the new event constructor. Again, we’ll do a transition where we update the todos somehow.

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      ...
      TodoToggled i -> Transition s { todos = _ (todos s) } (pure Nothing)
      ...

We need a function Vector Todo -> Vector Todo that modifies the value at the index i. There’s no handy function like that available in the vector package, so we’ll create our own. Let’s call it mapAt.

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      ...
      TodoToggled i -> Transition s { todos = mapAt i _ (todos s) } (pure Nothing)
      ...

It will take as arguments the index, a mapping function, and a Vector a, and return a Vector a.

    mapAt :: Int -> (a -> a) -> Vector a -> Vector a

We implement it using Vector.modify, and actions on the mutable representation of the vector. We overwrite the value at i with the result of mapping f over the existing value at i.

    mapAt :: Int -> (a -> a) -> Vector a -> Vector a
    mapAt i f = Vector.modify (\v -> MVector.write v i . f =<< MVector.read v i)

To use mutable vector operations through the MVector name, we add the qualified import.

    import qualified Data.Vector.Mutable as MVector

Finally, we implement the function to map, called toggleCompleted.

    toggleCompleted :: Todo -> Todo
    toggleCompleted todo = todo { completed = not (completed todo) }

    update' :: State -> Event -> Transition State Event
    update' s e = case e of
      ...
      TodoToggled i -> Transition s { todos = mapAt i toggleCompleted (todos s) } (pure Nothing)
      ...

Now, we run our application, add some to-do items, and mark or unmark them as completed. We’re done!

## Learning More

Building our to-do list application, we have learned the basics of gi-gtk-declarative and the “App.Simple” architecture.
There’s more to learn, though, and I recommend checking out the project documentation. There are also a bunch of examples in the Git repository. Please note that this project is very young, and that APIs are not necessarily stable yet. I think, however, that it’s a much nicer way to build GTK+ applications using Haskell than the underlying APIs provided by the auto-generated bindings. Now, have fun building your own functional GTK+ applications!

## December 28, 2018

### Oskar Wickström

# Why I'm No Longer Taking Donations

Haskell at Work, the screencast focused on Haskell in practice, is approaching its one-year birthday. Today, I decided to stop taking donations through Patreon, due to the negative stress I’ve been experiencing.

## The Beginning

This journey started in January 2018. Having a wave of inspiration after watching some of Gary Bernhardt’s new videos, I decided to try making my own videos about practical Haskell programming. The goal was not only producing high-quality content, but doing so with high video and audio quality. Haskell at Work was born, and the first video was surprisingly well-received by followers on Twitter.

With the subsequent episodes being published in rapid succession, a follower base on YouTube grew quickly. A thousand or so followers might not be exceptional for a programming screencast channel on YouTube, but to me this was exciting and unexpected. To be honest, Haskell is not exactly a mainstream programming language.

Early on, encouraged by some followers, and being eager to develop the concept, I decided to set up Patreon as a way for people to donate to Haskell at Work. Much like the follower count, the number of patrons and their monthly donations grew rapidly, beyond any hopes I had.

## Fatigue Kicks In

The majority of screencasts were published between January and May. Then came the summer and my month-long vacation, in which I attended ZuriHac and spent three weeks in Bali with my wife and friends.
Also, I had started getting side-tracked by my project to build a screencast video editor in Haskell. Working on Komposition also spawned the Haskell package gi-gtk-declarative, and my focus got swept away from screencasts. In all fairness, I’m not great at consistently doing one thing for an extended period. My creativity and energy come in bursts, and they may not strike where and when I hope. Maybe this can be managed or controlled somehow, but I don’t know how.

With the lower publishing pace over the summer, a vicious circle of anxiety and low productivity grew. I had thoughts about shutting down the Patreon back then, but decided to instead pause it for a few months.

## Regaining Energy

By October, I had recovered some energy. I got very good feedback and lots of encouragement from people at Haskell eXchange, and decided to throw myself back into the game. I published one screencast in November, but something was still there nagging me. I felt pressure and guilt: I had not delivered on the promise given.

By this time, the Patreon donations had covered my recording equipment expenses, hosting costs over the year, and a few programming books I bought. The donations were still coming in, however, at around $160 per month, with me producing no obvious value for the patrons. The guilt was still there, even stronger than before.

I’m certain that this is all in my head. I do not blame any supporter for these feelings. You have all been great! Despite all the words of caution you hear about not reading the comments, my YouTube channel is filled with positive feedback and almost exclusively thumbs-up ratings, and I’m beyond thankful for the support I have received.

## Trying Something Else

After Christmas this year, I had planned to record and publish a new screencast. Various personal events got in the way, though, and I had very little time to spend on working with it, resulting in the same kind of stress. I took a step back and thought about it carefully, and I’ve realized that money is not a good driver for the free material and open-source code work that I do, and that it’s time for a change.

I want to make screencasts because I love doing it, and I will do so when I have time and energy.

From the remaining funds in my PayPal account, I have allocated enough to keep the domain name and hosting costs covered for another year, and I have donated the remaining amount (USD 450) to Haskell.org.

Please keep giving me feedback and suggestions for future episodes. Your ideas are great! I’m looking forward to making more Haskell at Work videos in the future, and I’m toying around with ideas on how to bring in guests, and possibly trying out new formats. Stay tuned, and thank you all for your support!

# Maybe

There are different approaches to the issue of not having a value to return. One idiom to deal with this in C++ is the use of boost::optional<T> or std::pair<bool, T>.
    class boost::optional<T> //Discriminated-union wrapper for values.

Maybe is a polymorphic sum type with two constructors: Nothing and Just a.
Here's how Maybe is defined in Haskell.
    {- The Maybe type encapsulates an optional value. A value of type
       Maybe a either contains a value of type a (represented as Just a),
       or it is empty (represented as Nothing). Using Maybe is a good way
       to deal with errors or exceptional cases without resorting to
       drastic measures such as error.

       The Maybe type is also a monad. It is a simple kind of error
       monad, where all errors are represented by Nothing. -}
    data Maybe a = Nothing | Just a

    {- The maybe function takes a default value, a function, and a Maybe
       value. If the Maybe value is Nothing, the function returns the
       default value. Otherwise, it applies the function to the value
       inside the Just and returns the result. -}
    maybe :: b -> (a -> b) -> Maybe a -> b
    maybe n _ Nothing  = n
    maybe _ f (Just x) = f x
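A few evaluations of maybe, as shipped in the Prelude, make the behaviour concrete:

```haskell
main :: IO ()
main = do
  print (maybe 0 (+ 1) (Just 41))   -- 42
  print (maybe 0 (+ 1) Nothing)     -- 0
  -- a common pattern: a default for a lookup that may fail
  print (maybe "none" id (lookup 2 [(1, "one"), (2, "two")]))
```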

I haven't tried to compile the following OCaml yet but I think it should be roughly OK.
    type 'a option = None | Some of 'a ;;

    let maybe n f a =
      match a with
        | None -> n
        | Some x -> f x
    ;;

Here's another variant on the Maybe monad, this time in Felix. It is applied to the problem of "safe arithmetic", i.e. the usual integer arithmetic but with guards against under/overflow and division by zero.
    union success[T] =
      | Success of T
      | Failure of string
      ;

    fun str[T] (x:success[T]) =>
      match x with
        | Success ?t => "Success " + str(t)
        | Failure ?s => "Failure " + s
      endmatch
      ;

    typedef fun Fallible (t:TYPE) : TYPE => success[t] ;

    instance Monad[Fallible]
    {
      fun bind[a, b] (x:Fallible a, f: a -> Fallible b) =>
        match x with
          | Success ?a => f a
          | Failure[a] ?s => Failure[b] s
        endmatch
        ;

      fun ret[a](x:a):Fallible a => Success x ;
    }

    //Safe arithmetic.

    const INT_MAX:int requires Cxx_headers::cstdlib ;
    const INT_MIN:int requires Cxx_headers::cstdlib ;

    fun madd (x:int) (y:int) : success[int] =>
      if x > 0 and y > (INT_MAX - x) then
        Failure[int] "overflow"
      else
        Success (y + x)
      endif
      ;

    fun msub (x:int) (y:int) : success[int] =>
      if x > 0 and y < (INT_MIN + x) then
        Failure[int] "underflow"
      else
        Success (y - x)
      endif
      ;

    fun mmul (x:int) (y:int) : success[int] =>
      if x != 0 and y > (INT_MAX / x) then
        Failure[int] "overflow"
      else
        Success (y * x)
      endif
      ;

    fun mdiv (x:int) (y:int) : success[int] =>
      if (x == 0) then
        Failure[int] "attempted division by zero"
      else
        Success (y / x)
      endif
      ;

    //--
    //
    //Test.

    open Monad[Fallible] ;

    //Evaluate some simple expressions.

    val zero = ret 0 ;
    val zero_over_one = bind ((Success 0), (mdiv 1)) ;
    val undefined = bind ((Success 1), (mdiv 0)) ;
    val two = bind ((ret 1), (madd 1)) ;
    val two_by_one_plus_one = bind (two, (mmul 2)) ;

    println$ "zero = " + str zero ;
    println$ "1 / 0 = " + str undefined ;
    println$ "0 / 1 = " + str zero_over_one ;
    println$ "1 + 1 = " + str two ;
    println$ "2 * (1 + 1) = " + str (bind (bind ((ret 1), (madd 1)), (mmul 2))) ;
    println$ "INT_MAX - 1 = " + str (bind ((ret INT_MAX), (msub 1))) ;
    println$ "INT_MAX + 1 = " + str (bind ((ret INT_MAX), (madd 1))) ;
    println$ "INT_MIN - 1 = " + str (bind ((ret INT_MIN), (msub 1))) ;
    println$ "INT_MIN + 1 = " + str (bind ((ret INT_MIN), (madd 1))) ;
    println$ "--" ;

    //We do it again, this time using the "traditional" rshift-assign
    //syntax.

    syntax monad //Override the right shift assignment operator.
    {
      x[ssetunion_pri] := x[ssetunion_pri] ">>=" x[>ssetunion_pri] =># "(ast_apply ,_sr (bind (,_1 ,_3)))";
    }
    open syntax monad;

    println$ "zero = " + str (ret 0) ;
    println$ "1 / 0 = " + str (ret 1 >>= mdiv 0) ;
    println$ "0 / 1 = " + str (ret 0 >>= mdiv 1) ;
    println$ "1 + 1 = " + str (ret 1 >>= madd 1) ;
    println$ "2 * (1 + 1) = " + str (ret 1 >>= madd 1 >>= mmul 2) ;
    println$ "INT_MAX = " + str (INT_MAX) ;
    println$ "INT_MAX - 1 = " + str (ret INT_MAX >>= msub 1) ;
    println$ "INT_MAX + 1 = " + str (ret INT_MAX >>= madd 1) ;
    println$ "INT_MIN = " + str (INT_MIN) ;
    println$ "INT_MIN - 1 = " + str (ret INT_MIN >>= msub 1) ;
    println$ "INT_MIN + 1 = " + str (ret INT_MIN >>= madd 1) ;
    println$ "2 * (INT_MAX/2) = " + str (ret INT_MAX >>= mdiv 2 >>= mmul 2 >>= madd 1) ;
    //The last one since we know INT_MAX is odd and that division will truncate.
    println$ "2 * (INT_MAX/2 + 1) = " + str (ret INT_MAX >>= mdiv 2 >>= madd 1 >>= mmul 2) ;
    //--

That last block using the >>= syntax produces (in part) the following output (the last two print statements have been truncated away -- the very last one produces an expected overflow).
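For comparison, the same "safe arithmetic" idea transcribes almost mechanically into Haskell, using Either String as the error monad in place of the success[T] union (a sketch covering only madd and mdiv):

```haskell
-- As in the Felix version, the first argument is applied second:
-- madd x y computes y + x, and mdiv x y computes y / x.
madd :: Int -> Int -> Either String Int
madd x y
  | x > 0 && y > maxBound - x = Left "overflow"
  | otherwise                 = Right (y + x)

mdiv :: Int -> Int -> Either String Int
mdiv 0 _ = Left "attempted division by zero"
mdiv x y = Right (y `div` x)

main :: IO ()
main = do
  print (return 1 >>= madd 1)                  -- Right 2
  print (return 1 >>= mdiv 0)                  -- Left "attempted division by zero"
  print (return maxBound >>= madd (1 :: Int))  -- Left "overflow"
```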
## April 07, 2020

### Mark Jason Dominus

# Fern motif experts on the Internet

I live near Woodlands Cemetery, and by far the largest monument there, a thirty-foot obelisk, belongs to Thomas W. Evans, who is an interesting person. In his life he was a world-famous dentist, whose clients included many crowned heads of Europe. He was born in Philadelphia, and left land to the University of Pennsylvania to found a dental school, which to this day is located at the site of Evans’ former family home at 40th and Spruce Street.

A few days ago my family went to visit the cemetery and I insisted on visiting the Evans memorial. The obelisk has this interesting ornament: the thing around the middle is evidently a wreath of pine branches, but what is the thing in the middle? Some sort of leaf, or frond perhaps? Or is it a feather? If Evans had been a writer I would have assumed it was a quill pen, but he was a dentist.

Thanks to the Wonders of the Internet, I was able to find out. First I took the question to Reddit's /r/whatisthisthing forum. Reddit didn't have the answer, but Reddit user @hangeryyy had something better: they observed that there was a fad for fern decorations, called pteridomania, in the second half of the 19th century. Maybe the thing was a fern.

I was nerdsniped by pteridomania and found out that a book on it, Fern Fever: The Story of Pteridomania, had been written by Dr. Sarah Whittingham, who goes by the encouraging Twitter name of @DrFrond. Dr. Whittingham's opinion is that this is not a fern frond, but a palm frond.

The question has been answered to my full and complete satisfaction. My thanks to Dr. Whittingham, @hangeryyy, and the /r/whatisthisthing community.
### Brent Yorgey

# Data structure challenge: application

I forgot to mention this in my previous post, but the thing which got me thinking about the predecessor problem in the first place was a competitive programming problem on Open Kattis. I challenge you to go and solve it using your favorite technique from the previous post. (The connection between this Kattis problem and the predecessor problem is not immediately obvious, but I have to leave you something to puzzle over!)

## April 06, 2020

### Joachim Breitner

# A Telegram bot in Haskell on Amazon Lambda

I just had a weekend full of very successful serious geekery. On a whim I thought: “Wouldn’t it be nice if people could interact with my game Kaleidogen also via a Telegram bot?” This led me to learn how to write a Telegram bot in Haskell and how to deploy such a Haskell program to Amazon Lambda. In particular the latter bit might be interesting to some of my readers, so here is how I went about it.

## Kaleidogen

Kaleidogen is a little contemplative game (or toy) where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. BTW, I am looking for help turning it into an Android app!

## Amazon Lambda

Amazon Lambda is the “Function as a Service” offering of Amazon Web Services. The idea is that you don’t rent a server, where you have to deal with managing the whole system and that you are paying for constantly, but you just upload the code that responds to outside requests, and AWS takes care of the rest: starting and stopping instances, providing a secure base system, etc. When nobody is using the service, no cost occurs.

This sounds ideal for hosting a toy Telegram bot: most of the time nobody will be using it, and I really don’t want to have to babysit yet another service on my server. On Amazon Lambda, I can probably just forget about it.
But Haskell is not one of the officially supported languages on Amazon Lambda. So to run Haskell on Lambda, one has to solve two problems:

- how to invoke the Haskell code on the server, and
- how to build Haskell so that it runs on the Amazon Linux distribution

## A Haskell runtime for Lambda

For the first we need a custom runtime. While this sounds complicated, it is actually a pretty simple concept: a runtime is an executable called bootstrap that queries the Lambda Runtime Interface for the next request to handle. The Lambda documentation is phrased as if this runtime has to be a dispatcher that calls the separate function’s handler, but it could just do all the things directly.

I found the Haskell package aws-lambda-haskell-runtime, which provides precisely that: a function

    runLambda :: (LambdaOptions -> IO (Either String LambdaResult)) -> IO ()

that talks to the Lambda Runtime API and invokes its argument on each message. The package also provides Template Haskell magic to collect “handlers” of any JSON-able type and generate a dispatcher, like you might expect from other, more dynamic languages. But that was too much magic for me, so I ignored that and just wrote the handler manually:

    main :: IO ()
    main = runLambda (run tc)
      where
        run :: LambdaOptions -> IO (Either String LambdaResult)
        run opts = do
          result <- handler (decodeObj (eventObject opts)) (decodeObj (contextObject opts))
          either (pure . Left . encodeObj) (pure . Right . LambdaResult . encodeObj) result

    data Event = Event
      { path :: T.Text
      , body :: Maybe T.Text
      } deriving (Generic, FromJSON)

    data Response = Response
      { statusCode :: Int
      , headers :: Value
      , body :: T.Text
      , isBase64Encoded :: Bool
      } deriving (Generic, ToJSON)

    handler :: TC -> Event -> Context -> IO (Either String Response)
    handler tc Event{body, path} context = …

I expose my Lambda function to the world via Amazon’s API Gateway, configured to just proxy the HTTP requests.
This means that my code receives a JSON data structure describing the HTTP request (here called Event, listing only the fields I care about), and it will respond with a Response, again as JSON.

The handler can then simply pattern-match on the path to decide what to do. For example, this code handles URLs like /img/CAFFEEFACE.png and responds with an image:

    handler :: TC -> Event -> Context -> IO (Either String Response)
    handler tc Event{body, path} context
      | Just bytes <- isImgPath path >>= T.decodeHex = do
          let pngData = genPurePNG bytes
          pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("image/png" :: String) ]
            , isBase64Encoded = True
            , body = T.decodeUtf8 $ LBS.toStrict $ Base64.encode pngData
            }
    …

    isImgPath :: T.Text -> Maybe T.Text
    isImgPath = T.stripPrefix "/img/" >=> T.stripSuffix ".png"
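Here >=> is Kleisli composition: it chains the two Maybe-returning functions so that a failure at either step yields Nothing. The same function can be sketched over plain String (Data.Text provides stripSuffix directly; for String we define one):

```haskell
import Control.Monad ((>=>))
import Data.List (isSuffixOf, stripPrefix)

-- plain-String version of stripSuffix (Data.Text has this built in)
stripSuffix :: String -> String -> Maybe String
stripSuffix suf s
  | suf `isSuffixOf` s = Just (take (length s - length suf) s)
  | otherwise          = Nothing

isImgPath :: String -> Maybe String
isImgPath = stripPrefix "/img/" >=> stripSuffix ".png"

main :: IO ()
main = do
  print (isImgPath "/img/CAFFEEFACE.png")  -- Just "CAFFEEFACE"
  print (isImgPath "/telegram")            -- Nothing
```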

If this program were to grow, one should probably use something more structured for routing here; maybe servant, or bridging towards wai apps (almost like wai-lambda, but that still assumes an existing runtime, instead of simply being the runtime). But for my purposes, no extra layers of indirection or abstraction are needed!

Building Haskell locally and deploying to different machines is notoriously tricky; you often end up depending on a shared library that is not available on the other platform. The aws-lambda-haskell-runtime package, and similar projects like serverless-haskell, solve this using stack and Docker – two technologies that are probably great, but I never warmed up to them.

So instead of adding layers and complexity, can I solve this by making things simpler? If I compile my bootstrap into a static Linux binary, it should run on any Linux, including Amazon Linux.

Unfortunately, building Haskell programs statically is also notoriously tricky. But it is made much simpler by the work of Niklas Hambüchen and others in the context of the Nix package manager, coordinated in the static-haskell-nix project. The promise here is that once you have set up building your project with Nix, then getting a static version is just one flag away. The support is not completely upstreamed into nixpkgs proper yet, but their repository has a nix file that contains a nixpkgs set with their patches:

    let pkgs = (import (sources.nixpkgs-static + "/survey/default.nix") {}).pkgs; in

This, plus a fairly standard nix setup to build the package, yields what I was hoping for:

    $ nix-build -A kaleidogen
    /nix/store/ppwyq4d964ahd6k56wsklh93vzw07ln0-kaleidogen-0.1.0.0
    $ file result/bin/kaleidogen-amazon-lambda
    result/bin/kaleidogen-amazon-lambda: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
    $ ls -sh result/bin/kaleidogen-amazon-lambda
    6,7M result/bin/kaleidogen-amazon-lambda

If we put this file, named bootstrap, into a zip file and upload it to Amazon Lambda, then it just works! Creating the zip file is easily scripted using nix:

    function-zip = pkgs.runCommandNoCC "kaleidogen-lambda" {
        buildInputs = [ pkgs.zip ];
      } ''
        mkdir -p $out
        cp ${kaleidogen}/bin/kaleidogen-amazon-lambda bootstrap
        zip $out/function.zip bootstrap
      '';

So to upload this, I use this one-liner (line-wrapped for your convenience):

    nix-build -A function-zip &&
    aws lambda update-function-code --function-name kaleidogen \
      --zip-file fileb://result/function.zip

Thanks to how Nix pins all dependencies, I am fairly confident that I can return to this project in 4 months and still be able to build it.

Of course, I want continuous integration and deployment. So I build the project with GitHub Actions, using a cachix nix cache to significantly speed up the build, and auto-deploy to Lambda using aws-lambda-deploy; see my workflow file for details.

## The Telegram part

The above allows me to run basically any stateless service, and a Telegram bot is nothing else: When configured to act as a WebHook, Telegram will send a request with a message to our Lambda function, where we can react on it.

The telegram-api package provides bindings for the Telegram Bot API (although I had to use the repository version, as the version on Hackage has some bitrot). Slightly simplified, I can write a handler for an Update:

```haskell
handleUpdate :: Update -> TelegramClient ()
handleUpdate Update{ message = Just m } = do
  let c = ChatId (chat_id (chat m))
  liftIO $ printf "message from %s: %s\n"
    (maybe "?" user_first_name (from m))
    (maybe "" T.unpack (text m))
  if "/start" `T.isPrefixOf` fromMaybe "" (text m)
  then do
    rm <- sendMessageM $ sendMessageRequest c "Hi! I am @KaleidogenBot. …"
    return ()
  else do
    m1 <- sendMessageM $ sendMessageRequest c "One moment…"
    withPNGFile $ \pngFN -> do
      m2 <- uploadPhotoM $ uploadPhotoRequest c
        (FileUpload (Just "image/png") (FileUploadFile pngFN))
      return ()
handleUpdate u = liftIO $ putStrLn $ "Unhandled message: " ++ show u
```

and call this from the handler that I wrote above:

```haskell
…
| path == "/telegram" =
    case eitherDecode (LBS.fromStrict (T.encodeUtf8 (fromMaybe "" body))) of
      Left err -> …
      Right update -> do
        runTelegramClient token manager $ handleUpdate update
        pure $ Right Response
          { statusCode = 200
          , headers = object [ "Content-Type" .= ("text/plain" :: String) ]
          , isBase64Encoded = False
          , body = "Done"
          }
…
```

Note that the Lambda code receives the request as a JSON data structure with a body that contains the original HTTP request body. Which, in this case, is itself JSON, so we have to decode that.

All that is left to do is to tell Telegram where this code lives:

```shell
curl --request POST \
  --url https://api.telegram.org/bot<token>/setWebhook \
  --header 'content-type: application/json' \
  --data '{"url": "https://api.kaleidogen.nomeata.de/telegram"}'
```

As a little add-on, I also created a Telegram game for Kaleidogen. A Telegram game is nothing but a webpage that runs inside Telegram, so it wasn't much work to wrap the Web version of Kaleidogen that way, but the resulting Telegram game (which you can access via https://core.telegram.org/bots/games) still looks pretty neat.

## No /dev/dri/renderD128

I am mostly happy with this setup: My game is now available to more people in more ways. I don't have to maintain any infrastructure. When nobody is using this bot, no resources are wasted, and the costs of the service are negligible -- this is unlikely to go beyond the free tier, and even if it did, the cost per generated image is roughly USD 0.000021.

There is one slight disappointment, though. What I find most interesting about Kaleidogen from a technical point of view is that when you play it in the browser, the images are not generated by my code. Instead, my code creates a WebGL shader program on the fly, and that program generates the image on your graphics card. I even managed to make the GL rendering code work headlessly, i.e. from a command-line program, using EGL and libgbm and a helper written in C. But it needs access to a graphics card via /dev/dri/renderD128. Amazon does not provide that to Lambda code, and neither do the other big Function-as-a-Service providers.
So I had to swallow my pride and reimplement the rendering in pure Haskell. If you think the bot is kinda slow, then that's why. Despite properly optimizing the pure implementation (the inner loop does no allocations and deals only with unboxed Double# values), the GL shader version is still three times as fast. Maybe in a few years GPU access will be so ubiquitous that it's even on Amazon Lambda; then I can easily use that.

### Monday Morning Haskell

# Serving HTML with Servant

We now have several different ways of generating HTML code from Haskell. Our last look at this issue explored the Lucid library. But in most cases you won't be writing client-side Haskell code. You'll have to send the HTML you generate to your end user, typically over a web server. So in this article we're going to explore the most basic way we can do that. We'll see how we can use the Servant library to send HTML in response to API requests.

For a more in-depth tutorial on making a web app with Servant, read our Real World Haskell series! You can also get some more ideas for Haskell libraries in our Production Checklist.

## Servant Refresher

Suppose we have a basic User type, along with JSON instances for it:

```haskell
data User = User
  { userId :: Int
  , userName :: String
  , userEmail :: String
  , userAge :: Int
  }

instance FromJSON User where
  ...

instance ToJSON User where
  ...
```

In Servant, we can expose an endpoint to retrieve a user by their database ID. We would have this type in our API definition, and a handler function:

```haskell
type MyAPI =
  "users" :> Capture "uid" Int :> Get '[JSON] (Maybe User) :<|>
  ... -- other endpoints

userHandler :: Int -> Handler (Maybe User)
userHandler = ...

myServer :: Server MyAPI
myServer = userHandler :<|> ... -- other handlers
```

Our endpoint says that when we get a request to /users/:uid, we'll return a User object, encoded in JSON. The userHandler performs the logic of retrieving this user from our database.
We would then let client-side Javascript code actually do the job of rendering our user as HTML. But let's flip the script a bit and embrace the idea of "server side rendering." Here, we'll gather the user information and generate HTML on our server. Then we'll send the HTML back in reply. First, we'll need a couple pieces of boilerplate.

## A New Content Type

In the endpoint above, the type list '[JSON] refers to the content type of our output. Servant knows that when we have JSON in our list, it should include a header in the response indicating that the body is JSON.

We now want to make a content type for returning HTML. Servant doesn't have this by default. If we try to return a PlainText HTML string, the browser won't render it! It will display the raw HTML string on a blank page! So to make this work, we'll start with two types. The first will be HTML. This will be our equivalent to JSON, and it's a dummy type, with no actual data! The second will be RawHtml, a simple wrapper for an HTML bytestring.

```haskell
import qualified Data.ByteString.Lazy as Lazy

data HTML = HTML

newtype RawHtml = RawHtml { unRaw :: Lazy.ByteString }
```

We'll use the HTML type in our endpoints as we currently do with JSON. It's a content type, and our responses need to know how to render it. This means making an instance of the Accept class. Using some helpers, we'll make this instance use the content-type: text/html header.

```haskell
import Network.HTTP.Media ((//), (/:))
import Servant.API (Accept(..))

instance Accept HTML where
  contentType _ = "text" // "html" /: ("charset", "utf-8")
```

Then, we'll link our RawHtml type to this HTML content type with the MimeRender class. We just unwrap the raw bytestring to send in the response.

```haskell
instance MimeRender HTML RawHtml where
  mimeRender _ = unRaw
```

This will let us use the combination of HTML content type and RawHtml result type in our endpoints, as we'll see. This is like making a ToJSON instance for a different type to use with the JSON content type.
## An HTML Endpoint

Now we can rewrite our endpoint so that it returns HTML instead! First we'll make a function that renders our User. We'll use Lucid in this case:

```haskell
import Lucid

renderUser :: Maybe User -> Html ()
renderUser maybeUser = html_ $ do
  head_ $ do
    title_ "User Page"
    link_ [rel_ "stylesheet", type_ "text/css", href_ "/styles.css"]
  body_ userBody
  where
    userBody = case maybeUser of
      Nothing -> div_ [class_ "login-message"] $ do
        p_ "You aren't logged in!"
        br_ []
        a_ [href_ "/login"] "Please login"
      Just u -> div_ [class_ "user-message"] $ do
        p_ $ toHtml ("Name: " ++ userName u)
        p_ $ toHtml ("Email: " ++ userEmail u)
        p_ $ toHtml ("Age: " ++ show (userAge u))
```

Now we'll need to rewrite our endpoint, so it uses our new type:

```haskell
type MyAPI =
  "users" :> Capture "uid" Int :> Get '[HTML] RawHtml :<|>
  ...
```

Finally, we rewrite our handler function to render the user immediately (Lucid's renderBS produces the lazy bytestring our RawHtml wrapper expects):

```haskell
userHandler :: Int -> Handler RawHtml
userHandler uid = do
  maybeUser <- fetchUser uid -- DB lookup or something
  return (RawHtml $ renderBS (renderUser maybeUser))
```

Our server would now work, returning the HTML string, which the browser would render!
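To see exactly what the browser receives, here is a small standalone sketch that renders the logged-out branch of the markup with Lucid's renderText. The helper name loginMessage is mine, introduced just for this example:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text.Lazy as TL
import qualified Data.Text.Lazy.IO as TLIO
import Lucid

-- A trimmed-down copy of renderUser's logged-out branch, just to
-- show the raw HTML string the endpoint would send back.
loginMessage :: Html ()
loginMessage = div_ [class_ "login-message"] $ do
  p_ "You aren't logged in!"
  br_ []
  a_ [href_ "/login"] "Please login"

main :: IO ()
main = TLIO.putStrLn (renderText loginMessage)
```

Running this prints a single line of HTML: the div with its class attribute, the paragraph, and the login link, exactly the bytes a browser would then render with your stylesheet.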

## Serving Static Files

There's one more thing we need to handle! Remember that HTML by itself is not typically enough. Our HTML files almost always reference other files, like CSS, Javascript, and images. When the user loads the HTML we send, they'll make another immediate request for those files. As is, our server won't render any styles for our user HTML. How do we serve these?

In Servant, the answer is the serveDirectoryWebApp function. This allows us to serve the files in a particular directory as static files. The first piece of this puzzle is to add an extra endpoint to our server definition. This will catch all patterns and return a Raw result, meaning the raw contents of a particular file.

```haskell
type MyAPI =
  "users" :> Capture "uid" Int :> Get '[HTML] RawHtml :<|>
  Raw
```

This endpoint must come last out of all our endpoints, even if we compose MyAPI with other API types. Otherwise it will catch every request and prevent other handlers from operating! This is like when you use a catch-all too early in a case statement.

Now for our "handler", we'll use the serveDirectoryWebApp function mentioned above.

```haskell
myServer :: Server MyAPI
myServer =
  userHandler :<|>
  serveDirectoryWebApp "static"
```

And now, if we have styles.css with appropriate styles, they'll render correctly!

## Conclusion

It's a useful exercise to go through the process of making our HTML content type manually. But Blaze and Lucid both have their own helper libraries to simplify this. Take a look at servant-blaze and servant-lucid. You can import the corresponding modules and this will handle the boilerplate for you.

Next week, we'll explore a few extra things we can do with Servant. We'll see some neat combinators that allow us to test our Servant API with ease!

Don't forget you can take a look at our Github repository for more details! This week's code is in src/BasicServant.hs.

# Anglo-Saxon and Hawai‘ian Wikipedias

Yesterday, browsing the [list of Wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias), I learned there is an Anglo-Saxon Wikipedia. This seems really strange to me for several reasons: Who is writing it? And why?

And there is a vocabulary problem. Not just because Anglo-Saxon is dead, and one wouldn't expect it to have words for anything invented in the last 900 years or so. But also, there are very few extant Anglo-Saxon manuscripts, so we don't have a lot of vocabulary, even for things that had been invented 900 years ago.

Helene Hanff said:

I have these guilts about never having read Chaucer but I was talked out of learning Early Anglo-Saxon / Middle English by a friend who had to take it for her Ph.D. They told her to write an essay in Early Anglo-Saxon on any-subject-of-her-own-choosing. “Which is all very well,” she said bitterly, “but the only essay subject you can find enough Early Anglo-Saxon words for is ‘How to Slaughter a Thousand Men in a Mead Hall’.”

I don't read Anglo-Saxon but if you want to investigate, you might look at the Anglo-Saxon article about the Maybach Exelero (a hēahfremmende sportƿægn), Barack Obama, or taekwondo. I am pre-committing to not getting sucked into this, but sportƿægn is evidently intended to mean “sportscar” (the ƿ is an obsolete letter called wynn and is approximately a W, so that ƿægn is “wagon”) and I think that fremmende is “foreign” and hēah is something like "high" or "very". But I'm really not sure.

Anyway Wikipedia reports that the Anglo-Saxon Wikipedia has 3,197 articles (although most are very short) and around 30 active users. In contrast, the Hawai‘ian Wikipedia has 3,919 articles and only around 14 active users, and that is a language that people actually speak.

# Caricatures of Nazis and the number four in Russian

I was looking at this awesome poster of D. Moor (Д. Моор), one of Russia's most famous political poster artists:

This is interesting for a couple of reasons. First, in Russian, “Himmler”, “Göring”, “Hitler”, and “Goebbels” all begin with the same letter, ‘Г’, which is homologous to ‘G’. (Similarly, Harry Potter in Russian is Га́рри, ‘Garri’.)

I also love the pictures, and especially Goebbels. These four men were so ugly, each in his own distinctively loathsome way. The artist has done such a marvelous job of depicting them, highlighting their various hideousnesses. It's exaggerated, and yet not unfair; these are really good likenesses! It's as if D. Moor had drawn a map of all the ways in which these men were ugly.

My all-time favorite depiction of Goebbels is this one, by Boris Yefimov (Бори́с Ефи́мов):

For comparison, here's the actual Goebbels:

Looking at pictures of Goebbels, I had often thought “That is one ugly guy,” but never been able to put my finger on what specifically was wrong with his face. But since seeing the Efimov picture, I have never been able to look at a picture of Goebbels without thinking of a rat. D. Moor has also drawn Goebbels as a tiny rat, scurrying around the baseboards of his poster.

Anyway, that was not what I had planned to write about. The right-hand side of D. Moor's poster imagines the initial ‘Г’ of the four Nazis’ names as the four bent arms of the swastika. The captions underneath mean “first Г”, “second Г” and so on.

[ Addendum: Darrin Edwards explains the meaning here that had escaped me:

One of the Russian words for shit is "govno" (говно). A euphemism for this is to just use the initial g; so "something na g" is roughly equivalent to saying "a crappy something". So the title "vse na g" (all on g) is literally "they all start with g" but pretty blatantly means "they're all crap" or "what a bunch of crap". I believe the trick of constructing the swastika out of four g's is meant to extend this association from the four men to the entire movement…

Thank you, M. Edwards! ]

Looking at the fourth one, четвертое /chetvyertoye/, I had a sudden brainwave. “Aha,” I thought, “I bet this is akin to Greek “tetra”, and the /t/ turned into /ch/ in Russian.”

Well, now that I'm writing it down it doesn't seem that exciting. I now remember that all the other Russian number words are clearly derived from PIE just as Greek, Latin, and German are:

| English | German | Latin | Greek | Russian |
|---------|--------|-------|-------|---------|
| one | ein | unum | εἷς (eis) | оди́н (odeen) |
| two | zwei | duo | δύο (dyo) | два (dva) |
| three | drei | trēs | τρεῖς (treis) | три (tri) |
| four | vier | quattuor | τέτταρες (tettares) | четы́ре (chyetirye) |
| five | fünf | quinque | πέντε (pente) | пять (pyat’) |

In Latin that /t/ turned into a /k/ and we get /quadra/ instead of /tetra/. The Russian Ч /ch/ is more like a /t/ than it is like a /k/.

The change from /t/ to /f/ in English and /v/ in German is a bit weird. (The Big Dictionary says it “presents anomalies of which the explanation is still disputed”.) The change from the /p/ of ‘pente’ to the /f/ of ‘five’ is much more typical. (Consider Latin ‘pater’, ‘piscum’, ‘ped’ and the corresponding English ‘father’, ‘fish’, ‘foot’.) This is called Grimm's Law, yeah, after that Grimm.

The change from /q/ in quinque to /p/ in pente is also not unusual. (The ancestral form in PIE is believed to have been more like the /q/.) There's a classification of Celtic languages into P-Celtic and Q-Celtic that's similar, exemplified by the change from the Irish patronymic prefix Mac- into the Welsh patronymic map or ap.

I could probably write a whole article comparing the numbers from one to ten in these languages. (And Sanskrit. Wouldn't want to leave out Sanskrit.) The line for ‘two’ would be a great place to begin because all those words are basically the same, with only minor and typical variations in the spelling and pronunciation. Maybe someday.

# Data structure challenge: solutions

In my previous post I challenged you to find a way to keep track of a sequence of slots in such a way that we can quickly (in $O(\lg n)$ or better) either mark any empty slot as full, or find the rightmost empty slot prior to a given index. When I posted it, I had two solutions in mind; thanks to all the excellent comments I now know of many more!

• There were quite a few answers which in some way or another boiled down to using a balanced tree structure.

In all these cases, the idea is that the tree stores the sorted indices of all empty slots. We can mark a slot full or empty by deleting it from or inserting it into the tree, and we can find the rightmost empty slot not exceeding a given index by searching for the index and returning the highest value in the tree that is less than it. It is well-known that all these operations can be done in $O(\lg n)$ time.

• David Barbour suggested something along similar lines, but somewhat more general: keep a finger tree with cached monoidal annotations representing both the total number of elements of each subtree, as well as the number of elements satisfying some proposition (such as the number of empty slots). This also allows performing the operations in $O(\lg n)$ time. This is in some sense similar to the previous suggestions, but it generalizes much more readily, since we can use this scheme to track any kind of monoidal annotation. I had thought of using a segment tree where slot $i$ stores the value $i$ when it is full, and $0$ when it is empty, and each node caches the max value contained in its subtree, allowing updates and queries to happen in $O(\lg n)$ time. This could also track arbitrary monoidal annotations, but using a finger tree is strictly more expressive since it also supports insertion and deletion (although that is not required for my original formulation of the problem).

• Albert also suggested using a van Emde Boas tree to achieve $O(\lg \lg n)$ performance. Van Emde Boas trees directly support a “predecessor” operation which finds the largest key smaller than a given value.

• Roman Cheplyaka suggested using some sort of dynamic rank/select: if we think of the sequence of slots as a bit vector, and represent empty slots by 1s and full slots by 0s, we can find the rightmost empty slot up to a given index by first finding the rank of the index, then doing a select operation on that rank. (I smell some kind of adjunction here: the composition of rank then select is a sort of idempotent closure operator that “rounds down” to the index of the rightmost preceding 1 bit. Maybe one of my readers can elaborate?) The tricky part, apparently, is doing this in such a way that we can dynamically update bits; apparently it can be done so the operations are still $O(\lg n)$ (Roman linked to this paper) but it seems complicated.

• One of my favorite solutions (which I also independently came up with) was suggested by Julian Beaumont: use a disjoint-set data structure (aka union-find) which stores a set for each contiguous (possibly empty) block of full slots together with its one empty predecessor (we can create a dummy slot on the left end to act as the “empty predecessor” for the first block of full slots). Each set keeps track of the index of its leftmost, empty, slot, which is easy to do: any time two sets are unioned we simply take the minimum of their empty slot indices (more generally, we can annotate the sets of a disjoint-set structure with values from any arbitrary monoid). To mark an empty slot as full, we simply union its set with the set of the slot to the left. To find the rightmost empty slot left of a given index, just look up the stored leftmost index corresponding to the set of the given index. Both these operations can thus be implemented in amortized $O(\alpha(n))$ time (where $\alpha$ is the inverse Ackermann function, hence essentially constant). Intuitively, I doubt it is possible to do any better than this. Curiously, however, unlike the other solutions, this solution depends crucially on the fact that we can never revert a full slot to empty!

• Apparently, this is known as the predecessor problem, and can also be solved with something called fusion trees which I had never heard of before.
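For concreteness, here are minimal Haskell sketches of two of the solutions above, using the containers library. The names (markFull, emptyBefore, fill, and so on) are mine, and the union-find version omits path compression and union-by-rank, so it is only a sketch and does not achieve the amortized inverse-Ackermann bound:

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

-- Balanced-tree solution: the set holds the indices of all empty slots.
markFull, markEmpty :: Int -> S.Set Int -> S.Set Int
markFull  = S.delete
markEmpty = S.insert

-- Rightmost empty slot strictly before the given index, in O(lg n).
emptyBefore :: Int -> S.Set Int -> Maybe Int
emptyBefore = S.lookupLT

-- Union-find solution: each contiguous block of full slots is merged with
-- the one empty slot to its left, and the root of every set is that
-- leftmost, empty slot.  Absent keys are roots, i.e. empty slots; slot 0
-- can serve as the dummy always-empty predecessor.
type DSU = M.Map Int Int  -- child -> parent

root :: DSU -> Int -> Int
root p i = maybe i (root p) (M.lookup i p)

-- Mark empty slot i as full by unioning it with the block on its left.
fill :: Int -> DSU -> DSU
fill i p = M.insert i (root p (i - 1)) p

-- Rightmost empty slot at or before index i.
emptyAtOrBefore :: DSU -> Int -> Int
emptyAtOrBefore = root

main :: IO ()
main = do
  print (emptyBefore 5 (S.fromList [1, 3, 7]))  -- Just 3
  print (emptyAtOrBefore (fill 3 M.empty) 3)    -- 2
```

Note the slight difference in query semantics: lookupLT answers "strictly before", while the union-find query answers "at or before"; either can be adapted to the other by shifting the index.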

# DWARF support in GHC (part 4)

This post is the fourth of a series examining GHC’s support for DWARF debug information and the tooling that this support enables:

• Part 1 introduces DWARF debugging information and explains how its generation can be enabled in GHC.
• Part 2 looks at a DWARF-enabled program in gdb and examines some of the limitations of this style of debug information.
• Part 3 looks at the backtrace support of GHC’s runtime system and how it can be used from Haskell.
• Part 4 examines how the Linux perf utility can be used on GHC-compiled programs.
• Part 5 concludes the series by describing future work, related projects, and ways in which you can help.

## Profiling with perf

The final application of debug information that we will examine is performance analysis, specifically profiling with the Linux perf tool.

perf is a statistical profiling tool relying on, among other things, the underlying machine’s performance monitoring hardware. It can be used not only to profile time, but also details of the microarchitecture such as cache-misses, front-end stalls, etc. With a bit of cleverness, one can even use perf to profile allocations of a Haskell program.

Moreover, all of this profiling functionality comes with a negligible impact on the throughput of the profiled program. Of course, for all of these benefits one trades off precision and ease of interpretation of the resulting profile.

Summary of the trade-offs made by the cost-center profiler and statistical profiling with perf:

| Attribute | perf | cost-center profiler |
|-----------|------|----------------------|
| Runtime impact | negligible | anywhere from moderate to high |
| Relatability to source program | often hard to relate back to source program | profile structure directly reflects cost centers declared in source program |
| Provides call graphs | yes, with limited depth | yes |
| Provides call graphs on Haskell programs | not currently | yes |
| Profiled variables | time, allocations, micro-architectural counters, system calls, user- and kernel-space probe points | time, allocations |
| Determinism | profile is statistical and will likely vary from run to run | deterministic |

To see this trade-off in action, let's return to our vector-tests-O0 example. We can acquire a simple time profile by running the executable under perf record:

```
$ perf record vector-tests-O0
```

The resulting profile can be examined via perf report. This will show you a TUI interface along the lines of:

```
Samples: 145K of event 'cycles:ppp', Event count (approx.): 98781024840
Overhead  Command          Shared Object    Symbol
  10.19%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_quotInteger_info
   5.02%  vector-tests-O0  vector-tests-O0  [.] evacuate
   3.58%  vector-tests-O0  vector-tests-O0  [.] stg_upd_frame_info+0xffffffffffc00000
   3.28%  vector-tests-O0  vector-tests-O0  [.] QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziGen_zdwzdsgamma_info
   2.96%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_timesInteger_info
   2.93%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_minusInteger_info
   2.67%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_testBitInteger_info
   2.34%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_eqIntegerzh_info
   2.30%  vector-tests-O0  vector-tests-O0  [.] randomzm1zi1zmc60864d5616c60090371cdf8e600240f388e8a9bd87aa769d8045bda89826ee2_SystemziRandom_zdwrandomIvalInteger_info
   2.18%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_plusInteger_info
   2.16%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_divInteger_info
   1.71%  vector-tests-O0  vector-tests-O0  [.] QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziGen_zdwilog2_info
   1.63%  vector-tests-O0  vector-tests-O0  [.] stg_ap_p_info+0xffffffffffc00010
   1.52%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_remInteger_info
   1.36%  vector-tests-O0  vector-tests-O0  [.] hs_popcnt64
   1.34%  vector-tests-O0  vector-tests-O0  [.] stg_PAP_apply
   1.23%  vector-tests-O0  vector-tests-O0  [.] stg_ap_0_fast
   1.22%  vector-tests-O0  vector-tests-O0  [.] stg_gc_noregs
   1.20%  vector-tests-O0  vector-tests-O0  [.] stg_newByteArrayzh
   1.15%  vector-tests-O0  vector-tests-O0  [.] stg_ap_pp_info+0xffffffffffc00000
   1.11%  vector-tests-O0  vector-tests-O0  [.] _randomzm1zi1zmc60864d5616c60090371cdf8e600240f388e8a9bd87aa769d8045bda89826ee2_SystemziRandom_b1_siHV_entry
   0.98%  vector-tests-O0  vector-tests-O0  [.] integerzmwiredzmin_GHCziIntegerziType_geIntegerzh_info
   0.98%  vector-tests-O0  vector-tests-O0  [.] stg_BLACKHOLE_info+0xffffffffffc00069
   0.92%  vector-tests-O0  vector-tests-O0  [.] scavenge_block
   ...
```

This profile shows costs from both the runtime system and compiled Haskell. If one selects an entry from this list with the arrow keys and presses enter, perf will show the annotated assembler of this function, along with the associated Haskell source. In principle, perf record can also give us call-graph information. We will examine this in depth in the next section.

### Paths to call-stack profiling

In addition to the flat profiles we saw earlier, perf can also produce call-graph profiles which sample not only the current location of execution but also the callers of the enclosing function. Such call-graph profiles are produced with the perf record -g command.

Unlike flat profiles, call-graph capture tends to be quite language- and hardware-dependent. Consequently, perf record currently provides three mechanisms for collecting call-stacks:

• via the frame pointer register: GHC doesn't track the frame pointer like many imperative languages do, so this is unusable on Haskell programs.

• via the last branch record (LBR): This takes advantage of the LBR, a hardware feature of Intel CPUs, and is only really usable on modern (e.g. Skylake and later) hardware. However, even on Skylake it provides only a very limited stack depth (32 branches on Skylake, of which many will be uninteresting).

• via DWARF unwinding: This uses DWARF unwinding information to decode the state of the stack, which is captured at sampling time. This capture can result in non-trivial runtime overhead.
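Each of these mechanisms can be requested explicitly via perf record's --call-graph option; this is the generic perf interface (see perf-record(1)), not Haskell-specific advice, and ./my-program here is a placeholder:

```shell
perf record --call-graph fp    ./my-program   # frame pointers
perf record --call-graph lbr   ./my-program   # Intel last branch record
perf record --call-graph dwarf ./my-program   # DWARF unwinding of stack snapshots
```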
Recent Intel hardware also includes another, more expensive hardware mechanism, the Branch Trace Store, which could be useful for deeper call-graph acquisition, but perf record currently does not support using it for call-graph capture.

Summary of trade-offs made by the various call-stack collection mechanisms supported by perf record:

| attribute | frame pointer | LBR | DWARF |
|-----------|---------------|-----|-------|
| maximum depth | unlimited | 32 on Skylake | unlimited |
| runtime overhead | negligible | negligible | small |
| compatible with GHC execution model | no | yes? | not currently |

Unfortunately, call-stack profiling support for Haskell is complicated by the fact that GHC currently uses machine registers in a rather non-standard way: rather than use the machine's stack pointer register (e.g. %rsp on x86-64) to store the Haskell stack pointer, GHC reserves another register (%rbp on x86-64). This choice simplifies interaction with foreign code at the expense of one usable register. While DWARF's representation of unwinding information is sufficiently flexible to encode GHC's non-standard stack register, the unwinding logic in the Linux kernel used by perf record sadly is not. For this reason, the unwind information produced by ghc -g is of little use when profiling with perf record -g.

Other than DWARF unwinding, the only other viable call-graph option is the last branch record (LBR). Unfortunately, the rather restrictive 32-branch depth limit and the fact that GHC does not use the traditional call instruction mean that in practice the call-graphs produced by this method tend not to be very useful.

In sum, I currently do not have a prescription for call-graph profiling of Haskell programs with perf. The next and final post will conclude this discussion of GHC's debug information support by looking at future directions (including solutions to this call-graph problem) and other related projects.
## April 05, 2020

### Mark Jason Dominus

# Screensharing your talk slides is skeuomorphic

Back when the Web was much newer, and people hadn't really figured it out yet, there was an attempt to bring a dictionary to the web. Like a paper dictionary, its text was set in a barely-readable tiny font, and there were page breaks in arbitrary places. That is a skeuomorph: an incidental feature of an object that persists even in a new medium where the incidental feature no longer makes sense.

Anyway, I was scheduled to give a talk to the local Linux user group last week, and because of current conditions we tried doing it as a videoconference. I thought this went well! We used Jitsi Meet, which I thought worked quite well, and which I recommend.

The usual procedure is for the speaker to have some sort of presentation materials, anachronistically called “slides”, which they display one at a time to the audience. In the Victorian age these were glass plates, and the image was projected on a screen with a slide projector. Later developments replaced the glass with celluloid or other transparent plastic, and then with digital projectors. In videoconferences, the slides are presented by displaying them on the speaker's screen, and then sharing the screen image to the audience.

This last development is skeuomorphic. When the audience is together in a big room, it might make sense to project the slide images on a shared screen. But when everyone is looking at the talk on their own separate screen anyway, why make them all use the exact same copy?

Instead, I published the slides on my website ahead of time, and sent the link to the attendees. They had the option to follow along on the web site, or to download a copy and follow along in their own local copy. This has several advantages:

1. Each audience member can adjust the monitor size, font size, and colors to suit their own viewing preferences. With the screenshare, everyone is stuck with whatever I have chosen. If my font is too small for one person to read, they are out of luck.

2. The audience can see the speaker. Instead of using my outgoing video feed to share the slides, I could share my face as I spoke. I'm not sure how common this is, but I hate attending lectures given by disembodied voices. And I hate even more being the disembodied voice. Giving a talk to people I can't see is creepy. My one condition to the Linux people was that I had to be able to see at least part of the audience.

3. With the slides under their control, audience members can go back to refer to earlier material, or skip ahead if they want. Haven't you had the experience of having the presenter skip ahead to the next slide before you had finished reading the one you were looking at? With this technique, that can't happen.

Some co-workers suggested the drawback that it might be annoying to try to stay synchronized with the speaker. It didn't take me long to get in the habit of saying “Next slide, #18” or whatever as I moved through the talk. If you try this, be sure to put numbers on the slides! (This is a good practice anyway, I have found.) I don't know if my audience found it annoying.

The whole idea only works if you can be sure that everyone will have suitable display software for your presentation materials. If you require WalSoft AwesomePresent version 18.3, it will be a problem. But for the past 25 years I have made my presentation materials in HTML, so this wasn't an issue.

If you're giving a talk over videoconference, consider trying this technique.

[ Addendum: I should write an article about all the many ways in which HTML has been a good choice. ]

### Well-Typed.Com

# DWARF support in GHC (part 3)

This post is the third of a series examining GHC’s support for DWARF debug information and the tooling that this support enables:

• Part 1 introduces DWARF debugging information and explains how its generation can be enabled in GHC.
• Part 2 looks at a DWARF-enabled program in gdb and examines some of the limitations of this style of debug information.
• Part 3 looks at the backtrace support of GHC’s runtime system and how it can be used from Haskell.
• Part 4 examines how the Linux perf utility can be used on GHC-compiled programs.
• Part 5 concludes the series by describing future work, related projects, and ways in which you can help.

## Getting backtraces from the runtime

We saw in the last post that GHC’s debug information can be used by the gdb interactive debugger to provide meaningful backtraces of running Haskell programs. However, debuggers are not the only consumer of these backtraces. For several releases now the GHC RTS has itself supported stack backtraces. This support can be invoked in two ways:

• via the SIGQUIT signal
• via the GHC.ExecutionStack interface in base

In the first case, programs built with debug symbols and a libdw-enabled compiler can be sent the SIGQUIT signal,1 resulting in a stack trace being blurted to stderr:

```
$ vector-tests-O0 >/dev/null & sleep 0.2; kill -QUIT %1
Caught SIGQUIT; Backtrace:
0x1387442    set_initial_registers (rts/Libdw.c:288.0)
0x1387abd    libdwGetBacktrace (rts/Libdw.c:259.0)
0x1373b26    backtrace_handler (rts/posix/Signals.c:534.0)
0x137f24f    _rts_stgzuapzup_ret (_build/stage1/rts/build/cmm/AutoApply.cmm:654.18)
0xa7cc80    _randomzm1zi1zmc60864d5616c60090371cdf8e600240f388e8a9bd87aa769d8045bda89826ee2_SystemziRandom_lvl6_siHP_entry (System/Random.hs:489.70)
0x12c52d8    integerzmwiredzmin_GHCziIntegerziType_minusInteger_info (libraries/integer-gmp/src/GHC/Integer/Type.hs:437.1)
0xa7d098    randomzm1zi1zmc60864d5616c60090371cdf8e600240f388e8a9bd87aa769d8045bda89826ee2_SystemziRandom_zdwrandomIvalInteger_info (System/Random.hs:487.20)
0x98da50    _QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziArbitrary_sat_sx8e_entry (Test/QuickCheck/Arbitrary.hs:988.26)
...
0x136571b    StgRunJmp (rts/StgCRun.c:370.0)
0x135f35e    hs_main (rts/RtsMain.c:73.0)
0x455df4    (null) (/opt/exp/ghc/ghc-8.10/vector/dist-newstyle/build/x86_64-linux/ghc-8.10.0.20191231/vector-0.13.0.1/t/vector-tests-O0/build/vector-tests-O0/vector-tests-O0)
0x7feecbd4ab8e    __libc_start_main (/nix/store/g2p6fwjc995jrq3d8vph7k45l9zhdf8f-glibc-2.27/lib/libc-2.27.so)
0x40a82a    _start (../sysdeps/x86_64/start.S:122.0)
```

This can be especially useful in diagnosing unexpected CPU usage or latency in long-running tasks (e.g. a server stuck in a loop).

Note, however, that this currently only provides a backtrace of the program’s main capability. Backtrace support for multiple capabilities is an outstanding task.

The runtime’s unwinding support can also be invoked from Haskell programs via the GHC.ExecutionStack interface. This provides:

-- | A source location.
data Location = {- ... -}

-- | Returns a stack trace of the calling thread or 'Nothing'
-- if the runtime system lacks libdw support.
getStackTrace :: IO (Maybe [Location])

In the future we would also like to provide

getThreadStackTrace :: ThreadId -> IO (Maybe [Location])

although this is an outstanding task.
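As a quick sketch of how the existing interface can be used (this assumes a GHC whose runtime was built with libdw support; on other builds getStackTrace simply returns Nothing):

```haskell
import GHC.ExecutionStack (Location (..), getStackTrace)

main :: IO ()
main = do
    mtrace <- getStackTrace
    case mtrace of
        Nothing   -> putStrLn "RTS built without libdw; no trace available"
        Just locs -> mapM_ (putStrLn . functionName) locs
```

On a non-libdw build this prints the fallback message; on a libdw-enabled build it prints the function name of each frame of the calling thread.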

This could be used in a number of ways:

1. when throwing an exception, one could capture the current stack for use in diagnostics output.

2. with getThreadStackTrace a monitoring library like ekg might provide the ability to enumerate the program’s threads and introspect on what they are up to.

We’ll look at (1) in greater detail below.

### Providing backtraces for exceptions

Attaching backtrace information to exceptions is fairly straightforward. For instance, one could provide

data WithStack e = WithStack (Maybe [Location]) e
instance Exception (WithStack e)

throwIOWithStack :: e -> IO a
throwIOWithStack exc = do
    stack <- getStackTrace
    throwM $ WithStack stack exc

throwWithStack :: e -> a
throwWithStack = unsafePerformIO . throwIOWithStack

-- | Attach a stack trace to any exception thrown by the enclosed action.
-- Note that this is idempotent.
addStack :: IO a -> IO a
addStack = handle f
  where
    f :: SomeException -> IO b
    f exc
      | Just WithStack{} <- fromException exc = throwIO exc  -- ensure idempotency
    f (SomeException exc) = throwIOWithStack exc

Keep in mind that DWARF stack unwinding can incur a significant overhead (being linear in the depth of the stack, with a significant constant factor). Consequently, it would be unwise to use throwIOWithStack indiscriminately (e.g. when throwing an asynchronous exception to kill another thread). However, for truly “exceptional” cases (e.g. failing due to a non-existent file), it would offer quite some value.

Unfortunately, the untyped nature of Haskell exceptions complicates the migration path for existing code. Specifically, if a library provides a function which throws MyException, users catching MyException would break if the library started throwing WithStack MyException. While this may be manageable in the case of user libraries, for packages at the heart of the Haskell ecosystem (e.g. base) this is a significant hurdle.

Another design which avoids this migration problem is to incorporate backtraces directly into the base SomeException type, which is used to represent all thrown exceptions. Specifically, Control.Exception could then expose a variety of throwing functions, reflecting the many call stack mechanisms GHC now offers:

data SomeException where
  SomeException :: forall e. Exception e
                => Maybe [Location]  -- ^ backtrace, if available
                -> e                 -- ^ the exception
                -> SomeException

-- | A representation of source locations consolidating 'GHC.Stack.SrcLoc',
-- 'GHC.Stack.CostCentre', and 'GHC.ExecutionStack.Location'.
data Location = {- ... -}

-- | Throws an exception with no stack trace.
throwIO :: e -> IO a

-- | Throws an exception with a stack trace captured via
-- 'GHC.ExecutionStack.getStackTrace'.
throwIOWithExecutionStack :: e -> IO a

-- | Throws an exception with a HasCallStack stack trace.
throwIOWithCallStack :: HasCallStack => e -> IO a

Of course, this raises the question of which call-stack method a particular exception ought to use. This is often unknowable, depending upon the user’s build configuration. Consequently, we might consider exposing something of the form:

-- | Throws an exception with a stack trace using the most
-- precise method available in the current build configuration.
throwIOWithStack :: HasCallStack => e -> IO a
throwIOWithStack
  | profiling_enabled = throwIOWithCostCentreStack
  | dwarf_enabled     = throwIOWithExecutionStack
  | otherwise         = throwIOWithCallStack

Finally, new “catch” operations could be introduced providing the handler access to the exception’s stack:

catchWithLocation :: IO a -> (e -> Maybe [Location] -> IO a) -> IO a

Above are just two possible designs; I’m sure there are other points worthy of exploration. Do let me know if you are interested in picking up this line of work.

The next post will look at using the Linux perf utility to profile Haskell executables.

1. The unfortunate choice of the SIGQUIT signal to dump a backtrace originates from Java virtual machine implementations, where this has long been available. GHC currently follows this precedent, although some people believe that SIGQUIT should be used for… quitting. Do let us know on #17451 if you feel we should reconsider the choice to follow Java on this point.↩︎

## April 04, 2020

### Well-Typed.Com

# DWARF support in GHC (part 2)

This post is the second of a series examining GHC’s support for DWARF debug information and the tooling that this support enables:

• Part 1 introduces DWARF debugging information and explains how its generation can be enabled in GHC.
• Part 2 looks at a DWARF-enabled program in gdb and examines some of the limitations of this style of debug information.
• Part 3 looks at the backtrace support of GHC’s runtime system and how it can be used from Haskell.
• Part 4 examines how the Linux perf utility can be used on GHC-compiled programs.
• Part 5 concludes the series by describing future work, related projects, and ways in which you can help.

## Using gdb on Haskell programs

gdb is an interactive debugger ubiquitous on Unix systems. Let’s try using it to run our executable and then break into execution with Ctrl-C:

$ gdb vector-tests-O0
GNU gdb (GDB) 8.3
...
(gdb) run
Data.Vector.Fusion.Bundle:
fromList.toList == id: [OK, passed 100 tests]
toList.fromList == id: [OK, passed 100 tests]
...
postscanr: [OK, passed 100 tests]
postscanr': [OK, passed 100 tests]
^C
stg_ap_pp_fast () at _build/stage1/rts/build/cmm/AutoApply.cmm:3708
3708	        default: {
(gdb) bt
#0  stg_ap_pp_fast () at _build/stage1/rts/build/cmm/AutoApply.cmm:3708
#1  0x000000000137de10 in _rts_stgzuupdzuframe_ret ()
#2  0x0000000000997668 in QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziArbitrary_zdfCoArbitraryAll1_info () at Test/QuickCheck/Arbitrary.hs:1230
#3  0x000000000137de10 in _rts_stgzuupdzuframe_ret ()
...
#54 0x0000000000997668 in QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziArbitrary_zdfCoArbitraryAll1_info () at Test/QuickCheck/Arbitrary.hs:1230
#55 0x000000000137de10 in _rts_stgzuupdzuframe_ret ()
#56 0x0000000001316510 in ghczmprim_GHCziClasses_zdfEqBoolzuzdczeze_info () at libraries/ghc-prim/GHC/Classes.hs:205
#57 0x0000000000643698 in r5bd2_info () at Data/Vector.hs:289
#58 0x000000000137de10 in _rts_stgzuupdzuframe_ret ()
#59 0x00000000009bc478 in s1TRr_info () at Test/QuickCheck/Property.hs:225
#60 0x00000000009bbc88 in s1TQC_info () at Test/QuickCheck/Property.hs:190
#61 0x0000000001375f50 in _rts_stgzucatchzuframe_ret () at rts/Exception.cmm:335
#62 0x00000000009bbe08 in s1TQX_info () at Test/QuickCheck/Property.hs:252
#63 0x00000000009b7670 in s1TIa_info () at Test/QuickCheck/Property.hs:216
...
#68 0x00000000009b7670 in s1TIa_info () at Test/QuickCheck/Property.hs:216
#69 0x00000000009b7c18 in s1TIJ_info () at Test/QuickCheck/Property.hs:207
#70 0x00000000009b7670 in s1TIa_info () at Test/QuickCheck/Property.hs:216
#71 0x00000000009b7670 in s1TIa_info () at Test/QuickCheck/Property.hs:216
#72 0x00000000009bff88 in QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziProperty_reduceRose1_info () at Test/QuickCheck/Property.hs:232
#73 0x00000000009e18a8 in s2eT1_info () at Test/QuickCheck/Test.hs:331
#74 0x0000000001375f50 in _rts_stgzucatchzuframe_ret () at rts/Exception.cmm:335
#75 0x00000000009e2380 in QuickCheckzm2zi13zi2zmac90a2a0d9e0dd2c227d795a9d4d9de22a119c3781b679f3b245300e1b658c43_TestziQuickCheckziTest_quickCheck4_info () at Test/QuickCheck/Test.hs:333
#76 0x00000000007479b0 in s4py_info () at Test/Framework/Providers/QuickCheck2.hs:114
#77 0x0000000000755cf8 in s2dc_info () at Test/Framework/Improving.hs:68
#78 0x0000000000762080 in r6ny_info () at Test/Framework/Runners/ThreadPool.hs:62
#79 0x0000000001375f50 in _rts_stgzucatchzuframe_ret () at rts/Exception.cmm:335
#81 0x000000000136571b in StgRunIsImplementedInAssembler () at rts/StgCRun.c:370
#82 0x000000000136241b in schedule (task=0x175aa00, initialCapability=<optimized out>) at rts/Schedule.c:467
#83 scheduleWaitThread (tso=<optimized out>, ret=ret@entry=0x0, pcap=pcap@entry=0x7fffffff5210) at rts/Schedule.c:2600
#84 0x000000000138b204 in rts_evalLazyIO (cap=cap@entry=0x7fffffff5210, p=p@entry=0x14122d0, ret=ret@entry=0x0) at rts/RtsAPI.c:530
#85 0x000000000135f35e in hs_main (argc=<optimized out>, argv=<optimized out>, main_closure=0x14122d0, rts_config=...) at rts/RtsMain.c:72
#86 0x0000000000455df4 in main ()
(gdb) 

Here we see the state of the stack around one second into the execution. Note that I have elided a good number (around 50) of repeated CoArbitraryAll1 frames. There are four kinds of symbol names seen here:

1. C functions provided by the runtime system (e.g. rts_evalLazyIO)
2. C– functions provided by the runtime system (e.g. _rts_stgzuupdzuframe_ret)
3. Haskell functions exported by modules (e.g. ghczmprim_GHCziClasses_zdfEqBoolzuzdczeze_info)
4. Haskell functions internal to modules (e.g. s1TRr_info)

In cases (2) and (3) the names are derived from source program names via GHC’s Z-encoding symbol name mangling scheme. For instance:

mangled:    ghczmprim_GHCziClasses_zdfEqBoolzuzdczeze_info
demangled:  ghc-prim_GHC.Classes_$fEqBool_$c==_info

In principle gdb could be taught to perform this demangling on our behalf.
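Indeed, the decoding is mechanical enough to sketch in a few lines. The helper below is hypothetical (it is not part of GHC) and covers only the escapes appearing above; GHC's real scheme also handles tuple constructors and numeric escapes:

```haskell
-- Hypothetical mini-decoder for GHC's Z-encoding (common escapes only).
-- Unrecognised characters pass through unchanged.
zDecode :: String -> String
zDecode ('z' : c : rest)
    | Just d <- lookup c escapes = d : zDecode rest
  where
    escapes = [ ('i', '.'), ('m', '-'), ('u', '_')
              , ('d', '$'), ('e', '='), ('z', 'z') ]
zDecode (c : rest) = c : zDecode rest
zDecode []         = []

-- zDecode "ghczmprim_GHCziClasses_zdfEqBoolzuzdczeze_info"
--   == "ghc-prim_GHC.Classes_$fEqBool_$c==_info"
```

Note that the literal underscores separating package, module, and name survive untouched; only underscores inside the original name are Z-encoded as zu.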

Aside from symbol names, the above backtrace also gives us (often more informative) source locations. However, it should be noted that this line information can be slightly misleading at times for reasons that we will describe below.

### Surprises in debug information

As noted above, GHC backtraces can at times be slightly surprising. To see why, consider the simple program,

f :: [Int] -> Int
f = sum . map (* 42)

The code generated for this function (when compiled with -O) will inevitably contain a multiplication instruction and an addition instruction. However, which source location should these be attributed to?1 One might say f, or somewhere in Data.List.sum, or somewhere in foldl (which sum is defined in terms of). Moreover, these sets of options can grow to be quite large, particularly in cases where stream fusion is involved.

Unfortunately, the DWARF specification requires that GHC choose precisely one of these locations when producing debug information. At the moment GHC uses heuristics to choose from among the options, but these heuristics, like all heuristics, can sometimes produce less than helpful results. GHC can also emit an extended DWARF form which encodes the entire set of source locations. However, there is currently no widely available tooling which can consume this information.

In addition to the above wrinkle there is another feature of GHC’s produced code that poses a challenge when producing debug information: tail calls. In a language lacking tail calls, a stack frame typically contains a pointer to the instruction following the function call that pushed the frame. This serves as the address to which execution will return after the callee finishes and is guaranteed to be in the same procedure as the call itself. For this reason, stack backtraces always reflect the history of a program’s execution. That is, a stack containing the frames f; g; h means:

• f called g
• g called h
• h is currently executing

However, tail calls break this model. For instance, consider the Haskell function:

tail_caller :: [Int] -> Int
tail_caller xs = sum xs

sum :: [Int] -> Int
sum xs = foldl' (+) 0 xs

Without optimisation, the code generated for tail_caller will simply jump straight to the entry-point of sum; no stack entry will be left to mark the fact that sum was called by tail_caller. This, coupled with the source location ambiguity described above, can at times lead to slightly surprising backtraces. If you find a case that you think makes little sense, please do open a ticket. We are always looking for ways to improve the quality of the debug information produced by GHC.
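To observe this for yourself, here is a small hedged demo (the names are mine, not from the post): build it with debug info, send the running process SIGQUIT, and note that no frame for tailCaller separates main from the fold:

```haskell
import Data.List (foldl')

mySum :: [Int] -> Int
mySum = foldl' (+) 0

-- The call below is a tail call: it compiles to a jump straight into
-- mySum, leaving no return frame, so tailCaller is absent from
-- runtime backtraces.
tailCaller :: [Int] -> Int
tailCaller xs = mySum xs

main :: IO ()
main = print (tailCaller [1 .. 500000000])
```

The large list merely keeps the program running long enough to catch it with SIGQUIT.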

### Breakpoints

In addition to printing backtraces, we can also set breakpoints:

(gdb) break Test/Framework/Improving.hs:68
(gdb) cont
Continuing.

Breakpoint 1, s2dc_info () at Test/Framework/Improving.hs:68
68	Test/Framework/Improving.hs: No such file or directory.
(gdb) bt
#0  s2dc_info () at Test/Framework/Improving.hs:68
#1  0x0000000000762080 in r6ny_info () at Test/Framework/Runners/ThreadPool.hs:62
#2  0x0000000001375f50 in _rts_stgzucatchzuframe_ret () at rts/Exception.cmm:335
#4  0x000000000136571b in StgRunIsImplementedInAssembler () at rts/StgCRun.c:370
#5  0x000000000136241b in schedule (task=0x175aa00, initialCapability=<optimized out>) at rts/Schedule.c:467
#6  scheduleWaitThread (tso=<optimized out>, ret=ret@entry=0x0, pcap=pcap@entry=0x7fffffff5210) at rts/Schedule.c:2600
#7  0x000000000138b204 in rts_evalLazyIO (cap=cap@entry=0x7fffffff5210, p=p@entry=0x14122d0, ret=ret@entry=0x0) at rts/RtsAPI.c:530
#8  0x000000000135f35e in hs_main (argc=<optimized out>, argv=<optimized out>, main_closure=0x14122d0, rts_config=...) at rts/RtsMain.c:72
#9  0x0000000000455df4 in main ()

However, at the moment this is little more than a parlor trick since our debug information does not encode information about in-scope bindings (fixing this is a significant project in its own right and is not currently planned).

In the next post we will see how we can use GHC’s native unwinding support to gather stack traces from within Haskell programs.

1. The very question of what it means to “attribute” a machine operation to a source location is itself a tricky one to precisely define in a lazy language like Haskell. Finding such a definition was a central part of Peter Wortmann’s dissertation. I would encourage anyone interested to peruse Chapter 4 of this very readable work.↩︎

Thank you, XKCD!

# The New New Yorker

Courtesy of Tom The Dancing Bug.

# DWARF support in GHC (part 1)

This post is the first of a series examining GHC’s support for DWARF debug information and the tooling that this support enables:

• Part 1 introduces DWARF debugging information and explains how its generation can be enabled in GHC.
• Part 2 looks at a DWARF-enabled program in gdb and examines some of the limitations of this style of debug information.
• Part 3 looks at the backtrace support of GHC’s runtime system and how it can be used from Haskell.
• Part 4 examines how the Linux perf utility can be used on GHC-compiled programs.
• Part 5 concludes the series by describing future work, related projects, and ways in which you can help.

## DWARF debugging information

For several years now GHC has had support for producing DWARF debugging information. DWARF is a widely-used format (used by Linux and several BSDs) for representing debug information (typically embedded in an executable) for consumption by runtime systems, profiling, and debugging tools. It allows representation of a variety of information:

• line information mapping instructions back to their location in the source program (e.g. the instruction at address x originated from myprogram.c line 42).

• unwind information allowing call chains to be reconstructed from the runtime state of the execution stack (e.g. the program is currently executing f, which was called from g, which was called from h, …)

• type information, allowing debugging tools to reconstruct the structure and identity of values from the runtime state of the program (e.g. when the program is executing the instruction at address x, the value sitting in the $rax register is a pointer to a Foobar object).

Collectively, this information is what allows debuggers (e.g. gdb) and profiling tools (e.g. perf) to do what they do.

The effort to add DWARF support to GHC started with Peter Wortmann’s dissertation work, which introduced the ability for GHC to emit basic line and unwind information in its executables. This support has matured considerably over the past few years and should finally be ready for use with GHC 8.10.

There are a few potential use-cases for DWARF information:

1. Use in native debugging tools (e.g. gdb)
2. Dumping runtime call stacks to the console using the SIGQUIT signal; this is particularly useful in production
3. Computing runtime call stacks from within the program (using the GHC.ExecutionStack interface in base)
4. Statistical profiling using tools like perf
5. Capturing call stacks in exceptions for reporting to the user

We will discuss all of these in this series of blog posts. The rest of this first post will examine how to compile a DWARF-enabled binary.

## First steps

As of GHC 8.10.2, GHC HQ will provide DWARF-enabled binary distributions for Debian 9, Debian 10, and Fedora 27 (as of 8.10.1 only Debian 9 is provided). These binary distributions differ in two respects from the non-DWARF distributions:

• all provided libraries (e.g. base, filepath, unix, etc.) are built with debug information.
• the runtime system is built with a dependency on the libdw library (provided by the elfutils package).

Like other compilers, debug information support under GHC is enabled with the -g flag. This flag can be passed a numeric “debug level”, which determines the detail (and, consequently, size) of the debug information that is produced. These levels are described in the GHC user guide.
When using native debug information we must keep in mind that all code linked into an executable (e.g. native libraries, Haskell libraries, and the code of the executable itself) must be built with debug information. Failure to ensure this will result in truncated backtraces.

To build a package with native debug information we can use cabal-install’s --enable-debug-info flag (or, below, its equivalent key in cabal.project). Here, we will use the vector testsuite as a non-trivial example:

$ git clone https://github.com/haskell/vector
$ cd vector
$ cat >>cabal.project.local <<EOF

package vector
tests: True

package *
debug-info: 2
EOF
$ cabal new-build vector-tests-O0

For the sake of demonstration we built the vector-tests-O0 testsuite (which builds vector’s tests without optimisation) since this provides slightly more interesting stack traces. We chose debug level 2 as we will not be using the GHC-specific debug information emitted by debug level 3.

At this point we have a DWARF-annotated binary. This binary is functionally identical to a non-annotated build (apart from containing quite a few more bits, weighing in at over 150 megabytes). Most importantly, no optimizations were inhibited by enabling debug information. In the next post we will begin to see what this extra 100 megabytes of debug information gives us.

## April 02, 2020

### Tweag I/O

# Eager vs. Lazy Instantiation: Making an Informed Decision

Gert-Jan Bottu

During my internship at Tweag, I've been given the opportunity to work on GHC alongside Simon Peyton Jones at Microsoft Research Cambridge (MSRC) and my Tweag supervisor Richard Eisenberg. During a visit to MSRC, I got caught up in a discussion regarding the lazy or eager instantiation of type variables in GHC. This discussion serves as a great showcase for how language design works in practice: it is a hard and involved process where not everyone will agree on the same answers. In this blog post I will show both sides of the discussion in order to:

1. Showcase the kind of tradeoffs that are made in language design.
2. Clarify this discussion and its relevance for the Haskell language and its community.
3. Make sure that you, as a Haskell developer, are sufficiently informed when you encounter the problems described here in your day-to-day life.

I thus wholeheartedly encourage you to visit the GitHub thread after reading this post! You can find all the code examples from this blog post here.

## What's the Problem?
Over the past few months, we encountered the following three issues:

### Synonyms

Consider the following example, where we define myConst to be a synonym for const, whose type is forall a b. a -> b -> a:

myConst = const

When inferring the type for myConst, the compiler instantiates the type of const, meaning that the type variables a and b get replaced by (potentially yet unknown) types. Afterwards, the compiler generalises this type, meaning that all remaining unknown types get bound using new forall binders. But should the resulting type for myConst be forall a b. a -> b -> a or forall b a. a -> b -> a? There is no way for the compiler to know the intended order of the generalised variables. While these types look equivalent, they are most certainly not in combination with type applications. Should the type of myConst @Int be forall b. Int -> b -> Int or forall a. a -> Int -> a?

For this reason GHC only allows type application for user-defined type variables, called "specified" variables. Compiler-generated variables, called "inferred" variables, can never be manually instantiated, and GHC marks them using braces, resulting in forall {a} {b}. a -> b -> a. If you want to know more about this distinction, you can read my last blog post, which goes into much greater depth on the topic. This means that myConst and const do not have the same type, which contradicts our intuition that they should behave identically.

### Type Abstraction

While discussing type inference for lambda binders, another issue popped up, this time related to type abstraction, as discussed in this accepted (but not yet implemented) GHC proposal. Just as you would write a lambda binder \ x -> e to introduce a term variable x and bring it into scope in e, this proposal allows \ @a -> e to bind a type variable a and bring it into scope in e. At the moment, the proposal only discusses this feature under type-checking, meaning that a type signature needs to be present.
However, in order to extend this feature with type inference, things got a bit more hairy. Consider the following example:

foo = \ @a (x :: a) -> x

We would expect the inferred type for foo to be forall a. a -> a, since we explicitly abstract over a: a is, literally, specified. However, under eager type instantiation, the type for foo would actually be forall {a}. a -> a.

### Nested Foralls

A third issue appears when instantiating types with nested forall binders. Consider the following example:

{-# LANGUAGE RankNTypes #-}

f :: forall a. a -> forall b. b -> b
f x = id

g = f

We ask GHC to infer the type of g for us, and as you might expect from the myConst example above, the types of f and g are not identical:

*Main> :set -fprint-explicit-foralls
*Main> :info f
f :: forall a. a -> forall b. b -> b
*Main> :info g
g :: forall {a}. a -> forall b. b -> b

While typing g, GHC instantiates and generalises the binder for a. As a consequence, the type of g now contains a mix of type variables which can and cannot be manually instantiated. This hard-to-predict handling of type variables is quite confusing. This behaviour was introduced in a recent GHC proposal and is expected to be released in GHC 8.12. If you want to try this example for yourself, you can download our prebuilt GHC compiler using docker pull gertjanb/simplified-subsumption-ghc, and run it using docker run -it gertjanb/simplified-subsumption-ghc.

## Eager vs. Lazy Instantiation

All three issues explained above involve type variable instantiation, so let us explore this in a bit more detail. When type inference encounters the variable f in the last example, a choice emerges when assigning it a type:

• GHC 8.10 instantiates types eagerly, resulting in the following type: alpha -> forall b. b -> b (note that we use Greek letters to denote types yet to be determined). Finally, generalisation produces for g the type forall {a}. a -> forall b. b -> b.
• Another choice is to instantiate lazily, that is, returning the type of f as is, and only instantiating it when needed. The function g thus gets assigned the type of f: forall a. a -> forall b. b -> b.

## A Silver Bullet!

Up to GHC 8.10, type variables have always been instantiated eagerly. However, lazy instantiation might solve our issues above! Let's investigate:

### Synonyms

When inferring a type for myConst under lazy instantiation, the type variables of const would not get instantiated, resulting in the inferred type forall a b. a -> b -> a. This is the type we would expect, and indeed GHC no longer treats const and myConst differently.

### Type Abstraction

When inferring a type for foo under lazy instantiation, the user-specified type variable a does not get instantiated and re-generalised, and the inferred type becomes forall a. a -> a. This is again the type we would expect for foo, and using this type as a signature works like a charm.

### Nested Foralls

A similar story holds for our g example above. Since type variables do not get instantiated unless absolutely necessary, its type becomes identical to the type of f: forall a. a -> forall b. b -> b, with no inferred variables.

## ... But Comes at a Cost

Unfortunately, as pointed out by Simon Peyton Jones, lazy instantiation might not be the amazing solution we hoped it would be. While figuring out the details of how this should work in practice, a number of new issues popped up:

### Case expressions

Type inference for case expressions requires the compiler to assign a monomorphic type to each of the branches. This means that the type can not contain top-level forall binders or binders on the right of function arrows. In order to illustrate this restriction, consider the following example:

bar1 True = \ x -> id

bar2 True = \ x -> id
bar2 False = error "Impossible case for reasons"

Under eager instantiation, the inferred types are as you would expect: when encountering id, its forall type is instantiated eagerly.
This results in the type forall {a} {b}. Bool -> a -> b -> b for both bar1 and bar2.

The case of lazy instantiation is more interesting. The function bar1 gets the type forall {a}. Bool -> a -> forall b. b -> b. But by adding the catch-all sanity check in bar2, we are actually introducing a case expression, thus forcing the compiler to return a monomorphic type. The foralls get instantiated, resulting in the same type we get from eager instantiation.

### Evaluation

So far the impact of our discussion has mainly been limited to type-level differences. However, these type-level choices do have an impact on the actual evaluation of the program. Consider the following example:

{-# LANGUAGE BangPatterns #-}

diverge = let !x = undefined in ()

Note that the bang forces GHC to evaluate x eagerly. We would thus expect this function to throw an exception. However, while this is certainly the case under eager instantiation, it does not hold when variables are instantiated lazily. This happens because the type of undefined is forall a. HasCallStack => a, and eager instantiation will instantiate the type a and the HasCallStack argument. Evaluating this instantiated undefined throws an exception, as we would expect. However, under lazy instantiation, the type of undefined remains effectively a function type (from HasCallStack evidence to type a), and functions do not diverge.

### Implicit Arguments

The story becomes even more involved when we include GHC's implicit arguments. While recognizing that this extension is not widely used, it remains important to take all the compiler's features into account when considering changes. Take for instance the following code example:

{-# LANGUAGE ImplicitParams #-}

x :: (?i :: Int) => Int
x = ?i

y :: (?i :: Int) => Int
y = let ?i = 5 in x

z :: Int
z = let ?i = 6 in y

Again, the choice of either eager or lazy instantiation determines the evaluation outcome.
Similarly to before, under eager instantiation, while typing y, the type of x gets instantiated right away with ?i = 5. On the other hand, under lazy instantiation, this is postponed as far as possible. Concretely, at the very end, when typing z, the implicit variable ?i has to be instantiated, in this case with ?i = 6. This means that z evaluates to 5 under eager instantiation (as most people would expect), but evaluates to 6 under lazy instantiation.

## Compromises Have to be Made

I hope this blog post illustrates that these decisions are just plain hard, and ultimately quite subjective. Both eager and lazy instantiation are reasonable approaches and lead to type-safe languages. In the end the choice comes down to taste and the desire for the best user experience. After going back and forth a couple of times, we finally concluded that eager instantiation seems most sensible: while lazy instantiation would certainly solve the three issues described above, it unfortunately comes at too heavy a cost. Instead, we will just have to accept the strangeness of synonyms not behaving as we expect and shallow instantiation making some variables inferred while keeping others specified. Regarding type abstraction in lambda binders, to avoid the issues described above, we propose limiting this feature to type checking only.

## Conclusion

GHC is made by the Haskell community, so it's important that you're informed about this discussion. I thus wholeheartedly encourage you to visit the GitHub page and continue reading about the topic.

## April 01, 2020

### Joachim Breitner

# 30 years of Haskell

Vitaly Bragilevsky, in a mail to the GHC Steering Committee, reminded me that the first version of the Haskell programming language was released exactly 30 years ago. On April 1st. So that raises the question: Was Haskell just an April fool's joke that was never retracted?
My own first exposure to Haskell was in April 2005; the oldest piece of Haskell I could find on my machine is this part of a university assignment from April:

> pascal 1 = [1]
> pascal (n+1) = zipWith (+) (x ++ [0]) (0 : x) where x = pascal n

This means that I now have witnessed half of Haskell's existence. I have never regretted getting into Haskell, and every time I come back from having worked in other languages (which all have their merits too), I greatly enjoy the beauty and elegance of expressing my ideas in a lazy and strictly typed language with a concise syntax.

I am looking forward to witnessing (and, to a very small degree, shaping) the next 15 years of Haskell.

### Michael Snoyman

# A Lazy Rust Compiler

Let’s face it: we all know that Rust is the best language on the market today, hands down. It has stolen all of the good features from all other languages out there, added some of its own, and is just cranking on awesomeness. The handwriting is on the wall: Rust wins, and all software in the next five years will be rewritten in Rust.

But there’s one prominent language feature that the Rust authors forgot about: laziness. Laziness is arguably the defining feature of Haskell, and despite all of Haskell’s many other limitations (strong typing, Software Transactional Memory, etc), laziness stands apart as a feature universally accepted as good. It’s a true shame that the Rust language authors were so short-sighted as to include unimportant features like enums and pattern matching in their language, yet leave out laziness.

And so I’m happy to announce a new project: Lazy Rust, aka Lust. Lust is going to be a source-compatible version of the Rust language. It has the capability to compile all existing Rust code without modification. However, it will automatically, transparently, and quickly perform laziness rewrites.
This will give massive performance speedups in many common cases, be no less efficient in others, and introduce zero differences in runtime behavior otherwise. I've written an extensive proof for these claims, but there's no room in this already overly long blog post to include it.

The question of course is: how will we get Lust out into the world? And this is the cool thing: we get it for free by rewriting the Rust compiler in Haskell. You see, Haskell provides laziness using a really simple feature, called thunks. We could try to implement thunks directly in Rust today, but that's a lot of work. Instead, rewriting a parser, type checker, code generator, and other tooling in Haskell is far easier.

There's only one downside to this implementation strategy: all Lust programs will have a runtime dependency on Haskell's premier compiler, GHC. However, this is a small price to pay for the massive benefits that laziness will bring in practice.

## March 31, 2020

### Joachim Breitner

# Animations in Kaleidogen

A while ago I wrote a little game (or toy) called Kaleidogen. It is a relatively contemplative game where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository.

It has mostly been quiet with this game, but I finally got around to adding a little bit of animation: when you have bred one of these patterns, you can animate its genesis, from nothing to a complex pattern, as you can see in this screencast: Kaleidogen, animated.

By the way: I am looking for collaborators who can help me get this into the Play Store properly, so let me know if you want to play around with Haskell, Android, Nix, OpenGL and cross-compilation.

## March 30, 2020

### Monday Morning Haskell

# Lucid: Another HTML Option

We're currently looking at different Haskell libraries for generating HTML code.
We've already explored how to do this a bit in Reflex FRP and using the Blaze library. This week, we'll consider one more library, Lucid. Then next week we'll start looking at some more complex things we can do with our generated code.

The approaches from Reflex and Blaze have a lot of similarities. In particular, both use monadic composition for building the tree. Lucid continues this theme as well, and it generally has a lot in common with Blaze. But there are a few differences too, and we'll explore those a bit.

If you want to play around with the code from this article a bit more, you should clone our Github repository! This repo contains the simpler Blaze and Html code, as well as some ways we'll use it. If you're ready to work on a full web application, you can also read our Real World Haskell series. This will walk you through the basics of building a web backend with a particular library stack. You can also download our Production Checklist to learn about more options!

## Similar Basics

Hopefully, you've already gotten familiar with Blaze's syntax. But even if you haven't, we're going to dive straight into Lucid. This syntax is pretty straightforward, as long as you know the basic HTML markup symbols. Here's the input form example we did last time, only now using Lucid:

{-# LANGUAGE OverloadedStrings #-}

module LucidLib where

import Lucid

mainHtml :: Html ()
mainHtml = html_ $ do
  head_ $ do
    title_ "Random Stuff"
    link_ [rel_ "stylesheet", type_ "text/css", href_ "screen.css"]
  body_ $ do
    h1_ "Welcome to our site!"
    h2_ $ span_ "New user?"
    div_ [class_ "create-user-form"] $ do
      form_ [action_ "createUser"] $ do
        input_ [type_ "text", name_ "username"]
        input_ [type_ "email", name_ "email"]
        input_ [type_ "password", name_ "password"]
        input_ [type_ "submit", name_ "submit"]
    br_ []
    h2_ $ span_ "Returning user?"
    div_ [class_ "login-user-form"] $ do
      form_ [action_ "login"] $ do
        input_ [type_ "email", name_ "email"]
        input_ [type_ "submit", name_ "submit"]
    br_ []

Right away things look pretty similar. We use a monad to compose our HTML tree. Each new action we add in the monad adds a new item in the tree. Our combinators match the names of HTML elements.

But there are, of course, a few differences. For example, we see lists for attributes instead of using the ! operator. Every combinator and attribute name has underscores. Each of these differences has a reason, as outlined by the author Chris Done in his blog post. Feel free to read this for some more details. Let's go over some of these differences.

## Naming Consistency

Let's first consider the underscores in each element name. What's the reason behind this? In a word, the answer is consistency. Let's recall what Blaze looks like:

import Text.Blaze.Html5 as H
import Text.Blaze.Html5.Attributes as A

blazeHtml :: Html
blazeHtml = docTypeHtml $ do
  H.head $ do
    H.title "Our web page"
  body $ do
    h1 "Welcome to our site!"
    H.div ! class_ "form" $ do
      p "Hello"

Notice first the qualified imports. Some of the element names conflict with Prelude functions. For example, we use head with normal lists and div for integer division. Another, class, is a Haskell keyword, so its combinator needs an underscore: class_. Further, we can use certain combinators, like style, either as an element or as an attribute. This is why we have two imports at the top of the page: it allows us to use H.style as an element or A.style as an attribute.

Just by adding an underscore to every combinator, Lucid simplifies this. We only need one import, Lucid, and we have consistency. Nothing needs qualifying.

## Attribute Lists

Another difference is attributes. In Blaze, we used the ! operator to compose attributes. So if we want several attributes on an item, we can keep adding them like so:

-- Blaze
stylesheet :: Html
stylesheet =
link ! rel "stylesheet" ! href "styles.css" ! type_ "text/css"

Lucid's approach rejects operators. Instead we use a list to describe our different attributes. Here's our style element in Lucid:

-- Lucid
stylesheet :: Html ()
stylesheet =
link_ [rel_ "stylesheet", type_ "text/css", href_ "screen.css"]

In a lot of ways this syntax is cleaner. It's easier to have lists as extra expressions we can reuse. It's much easier to append a new attribute to a list than to compose a new expression with operators. At least, you're much more likely to get the type signature correct. Ultimately this is a matter of taste.
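For instance, an attribute list can be bound to a name once and then reused or extended with ordinary list appending. A minimal sketch (the helper names sharedAttrs, usernameField and emailField are made up for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Lucid

-- A reusable attribute list (hypothetical helper)
sharedAttrs :: [Attribute]
sharedAttrs = [class_ "form-input"]

-- Extending the shared list is ordinary list appending
usernameField :: Html ()
usernameField = input_ (sharedAttrs ++ [type_ "text", name_ "username"])

emailField :: Html ()
emailField = input_ (sharedAttrs ++ [type_ "email", name_ "email"])

main :: IO ()
main = print (renderText usernameField)
```

Doing the same in Blaze means composing a new chain of ! applications, which is where the type signatures get fiddly.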

One reason for Blaze's approach is to avoid empty parameters on a large number of combinators. If a combinator can take a list as a parameter, what do you do if there are no attributes? You either have [] expressions everywhere or you make a whole secondary set of functions.

Lucid gets around this with some clever type class machinery. The following two expressions have the same type, even though the first one has no attribute list!

aDiv :: Html ()
aDiv = div_ $ p_ "Hello"

aDiv2 :: Html ()
aDiv2 = div_ [class_ "hello-div"] $ p_ "Hello"

Thanks to the Term class, we can either have a normal Html element follow our div_, or we can list some attributes first. Certain empty combinators like br_ don't fit this pattern as well: they can't have sub-elements, so they need the explicit [] parameter, as you can see above. This pattern is also what enables us to use the same style_ combinator in both situations.
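For example, Lucid's style_ shows this dual use directly: the same combinator renders a style element when given content, and a style attribute when used inside an attribute list. A small sketch (the CSS strings are made up):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Lucid

dualStyle :: Html ()
dualStyle = html_ $ do
  -- style_ as an element: renders a <style> tag in the head
  head_ $ style_ "body { margin: 0; }"
  -- style_ as an attribute: renders style="..." on the div
  body_ $ div_ [style_ "font-weight: bold"] "Hello"

main :: IO ()
main = print (renderText dualStyle)
```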

## Rendering

There are other details as well. The Monad instance for Html is better defined in Lucid. Lucid's expressions also have a built-in Show instance, which makes simple debugging easier.

For Blaze's part, I'll note that one advantage comes in the rendering functionality. It has a "pretty print" renderer that makes the HTML human-readable. I wasn't able to find a function to do this while poking around with Lucid. You can render in Lucid like so:

import Lucid

main :: IO ()
main = renderToFile "hello.html" mainHtml

mainHtml :: Html ()
mainHtml = ...

You'll get the proper HTML, but it won't look very appetizing.
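If you want the markup in memory instead of in a file, Lucid also provides renderText (producing lazy Text) and renderBS (producing a bytestring). A minimal sketch:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Lucid
import qualified Data.Text.Lazy as TL

greeting :: Html ()
greeting = div_ [class_ "greeting"] "Hello"

main :: IO ()
main = putStrLn (TL.unpack (renderText greeting))
-- prints: <div class="greeting">Hello</div>
```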

## Conclusion

So at the end of the day, Blaze and Lucid are more similar than they are different, and the choice between them is mostly one of taste. Now, we never want to produce HTML in isolation. We almost always want to serve it out to users as part of a more complete system. Next week, we'll start looking at some options for using the Servant library to send HTML to our end users.

There are many different pieces to building a web application! For instance, you'll need a server backend and a database! Download our Production Checklist to learn some more libraries you can use for those!

# The Diametric Safety Case Manager

In my work life I have spent the past two years writing the Diametric Safety Case Manager (DSM). This uses Goal Structuring Notation to represent safety arguments and bow-tie diagrams to represent the relationships between events and hazards.

Underlying these diagrams is a safety model; everything on a diagram is represented by an entity in the underlying model. A graphical query notation lets the user create tables and reports based on the contents of the model. Here is an example showing how a Failure Modes and Effects Analysis (FMEA) can be constructed from data entered in bow-tie diagrams.

The DSM is written in Haskell using GTK3 and Reactive Banana. I've blogged about the underlying mechanism here.

# Pauli chess

Last week Pierre-Françoys Brousseau and I invented a nice chess variant that I've never seen before. The main idea is: two pieces can be on the same square. Sometimes when you try to make a drastic change to the rules, what you get fails completely. This one seemed to work okay. We played a game and it was fun.

Specifically, our rules say:

1. All pieces move and capture the same as in standard chess, except:

2. Up to two pieces may occupy the same square.

3. A piece may move into an occupied square, but not through it.

4. A piece moving into a square occupied by a piece of the opposite color has the option to capture it or to share the square.

5. Pieces of opposite colors sharing a square do not threaten one another.

6. A piece moving into a square occupied by two pieces of the opposite color may capture either, but not both.

7. Castling is permitted, but only under the same circumstances as standard chess. Pieces moved during castling must move to empty squares.
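For the programmers in the audience, the occupancy rules lend themselves to a small data-model sketch (hypothetical Haskell types, not part of the original post): rule 2 is captured by the Square type, and a full square can only be entered by capturing one occupant, per rule 6.

```haskell
data Color = White | Black deriving (Eq, Show)

data Kind = Pawn | Knight | Bishop | Rook | Queen | King
  deriving (Eq, Show)

data Piece = Piece Color Kind deriving (Eq, Show)

-- Rule 2: up to two pieces may occupy the same square.
data Square = Empty | One Piece | Two Piece Piece
  deriving (Eq, Show)

-- Rules 3, 4 and 6: a piece may move into a square with room to share;
-- a square holding two pieces is full.
canShare :: Square -> Bool
canShare Empty     = True
canShare (One _)   = True
canShare (Two _ _) = False
```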

### Miscellaneous notes

Pierre-Françoys says he wishes that more than two pieces could share a square. I think it could be confusing. (Also, with the chess set we had, more than two did not really fit within the physical confines of the squares.)

Similarly, I proposed the castling rule because I thought it would be less confusing. And I did not like the idea that you could castle on the first move of the game.

The role of pawns is very different than in standard chess. In this variant, you cannot stop a pawn from advancing by blocking it with another pawn.

Usually when you have the chance to capture an enemy piece that is alone on its square you will want to do that, rather than move your own piece into its square to share space. But it is not hard to imagine that in rare circumstances you might want to pick a nonviolent approach, perhaps to avoid a stalemate.

The name “Pauli Chess” is inspired by the Pauli exclusion principle, which says that no more than two electrons can occupy the same atomic orbital.

# Basics of Carbohydrates

Context I originally wrote this content years ago for teaching my children. I happened to stumble across it today when cleaning up some repos, and thought with the current “everyone is home schooling” situation around COVID-19, it would be a good time to publish it.

Everything your body does needs to be powered. In your computer, the power is electricity. In your body, the power comes from food. In plants, the power (usually) comes from the sun. We have three main kinds of energy in food: carbohydrates (carbs), lipids (fats), and proteins. This is about carbs.

Your body is made up of lots of different cells: brain cells, heart cells, muscle cells, skin cells. Every cell in your body can use glucose for energy. Let’s start off by understanding what glucose is made up of.

## Structure of glucose

Glucose is a molecule, which means it's made up of atoms. In fact, it's made up of 6 carbons (C), 12 hydrogens (H), and 6 oxygens (O). Hydrogen and oxygen together make water (H2O), and hydro is Greek for water. That's where the word carbohydrate comes from: carbon + hydro = carbohydrate.

Here are two different pictures of what the glucose molecule looks like:

### Energy in glucose

Each time two atoms are connected to each other, it’s called a covalent bond. Each bond has some energy in it. This is called chemical energy. Your body needs to convert that chemical energy to energy it can use to move your muscles, let your brain think, and everything else. It does this by something called cellular respiration.

The next bit is like a math formula. Remember that glucose has 6 carbons, 12 hydrogens, and 6 oxygens. From now on, we'll write that as C6H12O6. Cellular respiration combines the glucose with oxygen (O2) to make carbon dioxide (CO2) and water (H2O). We'll play with the exact math later in the first exercise. The cool thing is: when you do this conversion, there are fewer covalent bonds at the end, so you free up energy.

Plants do the opposite: they take carbon dioxide and water, and combine it with energy from the sun using photosynthesis to make glucose and oxygen. That’s why animals breathe in oxygen and breathe out carbon dioxide, and plants do the opposite.

## Sugars

Glucose is one kind of a sugar. It’s actually a simple sugar, or a monosaccharide. Mono is greek for “one”, and saccharide is Latin for sugar. There are three different kinds of monosaccharides:

• Glucose
• Fructose
• Galactose

They all have the same number of carbons, hydrogens, and oxygens, but they have slightly different shapes. This is important, because it means that glucose can be used by any part of the body. However, fructose and galactose can only be used in the liver for producing energy.

There are also disaccharides. Di is Greek for “two,” and these are combinations of two simple sugars. Some common examples:

• Sucrose is a glucose and a fructose together
• Lactose is a glucose and a galactose together

When you hear of “sugar,” or see a package with sugar in it, it’s usually sucrose. It’s also called “table sugar.”

Lactose is also known as milk sugar, and is made by mother animals (mammals) for their babies. Fructose is sometimes called “fruit sugar,” because fruit has a lot of it.

## Polysaccharides

We can have more than just two sugars together. Poly is the Greek word for “many.” One kind of polysaccharide is starch, which is a long chain of glucose molecules. There are lots of foods that we eat that have a lot of starch in them:

• Grains, like wheat, oats, and rice
• Potatoes
• Sweet potatoes

When you eat starch (or, for that matter, disaccharides), your stomach and intestines will break it down into the individual simple sugars, which then get absorbed into your blood. Because starch takes longer to break down into simple sugars, it gets absorbed more slowly into your blood stream.

Another kind of polysaccharide is something animals make, called glycogen. Glycogen is something your muscles and liver store for energy. It’s easy to turn into glucose, and can be used for quick energy. It’s really useful, for example, if you need to run really fast for a short time, or lift heavy weights. That’s why eating lots of carbs before lifting weights helps so much.

Our bodies are very good at digesting (breaking down and absorbing) starch. But there are other kinds of polysaccharides that our body can’t break down. These are known as cellulose, or fiber. Two different things can happen with these:

• We poop them out. This is why eating lots of fiber makes us go to the bathroom more.
• The bacteria in our intestines (gut microbiome) breaks it down and turns it into fatty acids that we can absorb. This gives us some energy from fat, and gives our microbiome some food so it can grow.

This is why eating fiber is good for you: it helps you go to the bathroom regularly, and have a healthy gut biome.

When we turn raw food, like wheat, into food we can eat, it’s called processing it, or refining it. Highly refined foods are processed a lot, and look less like the original raw food. For example, wheat has lots of fiber in it, and whole wheat bread keeps that fiber. But white bread is more processed, and has had the fiber taken out. Highly processed carbs, like white bread, crackers, and pretzels, are not very good for you: they have most of the fiber stripped out!

## Good and bad of fructose

Fruit has lots of fiber, micronutrients (vitamins and minerals), and water. It also has fructose, but mixed in with all of these other things. Fruit is good for you, and you should eat it.

There’s another cool thing about fructose: our tongues think that it tastes sweeter than glucose. Lots of companies will use fructose to make food taste better. And this is the bad side. With a little bit of fructose in your fruit, your body is fine.

However, when you eat lots of fructose, without the water and fiber that comes in the fruit, your liver has to do lots of extra work. The liver works very hard converting the fructose into other energy, like fat, that your body can use. This makes your liver tired, and takes away time from doing other things your body needs it to do.

Also, when you have too much fat stuck in your liver, your body can develop something very dangerous, called fatty liver disease. This can lead to insulin resistance, which can lead to lots of bad diseases, like diabetes, cancer, and heart disease.

Lesson: don’t eat too much extra sugar in your food, the extra fructose can make you very sick over time!

## Exercises

1. When you have cellular respiration, your body takes one glucose (C6H12O6) and 6 oxygen molecules (O2) and makes some number of carbon dioxide (CO2) and water (H2O) molecules.

1. Figure out how many total carbon, hydrogen, and oxygen atoms are in the glucose and 6 oxygen molecules.

2. How many carbon dioxide molecules can you make out of that?

3. How much stuff is left? How much water can you make out of that?

4. Do you have any atoms left over?

2. What are the three monosaccharides?

3. Which sugar can be used in your whole body?

4. Where can the other two sugars be used in your body?

5. Which sugar do you find a lot of in fruit?

6. What is starch?

7. Which gets absorbed faster into your blood, glucose or starch?

• BONUS QUESTION: What are some good and bad things you can think of from getting absorbed faster?
8. What’s the name for the way animals store glucose in your livers and muscles?

9. What kind of carbohydrates are we bad at digesting? What kinds of good things do these do for us?

10. Name one of the bad diseases you can get from having too much fructose.

# The problem with adding functions to compact regions

The question of why functions cannot be added to compact regions often comes up in GHC's IRC channel, and because I don't work on or with compact regions, every time I see the question I try to remember the reason why functions can't be moved to a compact region, often coming up with incorrect answers along the way. There's also a question about this in GHC's issue tracker.

The problem is documented briefly in the GHC source code, but in the following, I want to explain things in more detail, and in my own words.

At the core of the problem are top-level thunks, which are called CAFs (constant applicative forms) in GHC source code.1 Here’s an example:

x :: [Int]
x = enumFromTo 1 1000000

When evaluated, x allocates a cons cell on the heap and becomes a pointer (an “indirection”) to it.

Because CAFs are top-level closures, you might expect them to be alive during the lifetime of a program, but that's not ideal because they sometimes allocate large amounts of memory. In the example above, when we fully evaluate x we'll have a million Ints and a million cons cells ([] is a static closure, so it's not allocated on the heap). A cons cell is 3 words and an Int is 2 words, so that's 2M heap objects (which will have to be traversed by the GC in every major GC) and 40M of heap space.
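The arithmetic behind those figures, assuming a 64-bit machine with 8-byte words:

```haskell
-- 1,000,000 cons cells at 3 words each, plus 1,000,000 Ints at 2 words each.
consWords, intWords, totalWords, totalBytes :: Int
consWords  = 1000000 * 3
intWords   = 1000000 * 2
totalWords = consWords + intWords   -- 5,000,000 words
totalBytes = totalWords * 8         -- 40,000,000 bytes, i.e. 40M

main :: IO ()
main = print (totalWords, totalBytes)
-- prints: (5000000,40000000)
```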

So instead GHC tracks CAFs like any other heap-allocated object and reclaims the space when a CAF is no longer reachable from the program’s root set.

In the rest of this post, I will discuss how CAFs relate to compact regions, and in particular what this means for the possible inclusion of functions in compact regions.
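For reference, the API discussed below lives in GHC.Compact (shipped with GHC in the ghc-compact package): compact forces a value and copies it into its own region, and getCompact reads it back. A minimal sketch:

```haskell
import GHC.Compact (compact, getCompact)

main :: IO ()
main = do
  -- compact forces the thunk and copies the resulting value
  -- into a region the GC treats as a single object.
  region <- compact (enumFromTo 1 5 :: [Int])
  print (getCompact region)
-- prints: [1,2,3,4,5]
```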

### CAFs and compact regions

From the GC's perspective a compact region is a single object, and the GC does not traverse objects within compact regions. If any object inside a compact region is reachable, then the whole region is retained. This means we can't have a pointer from a compact region to the outside if that pointer needs to be tracked by the GC. So CAF references from compact regions are not allowed, as they need to be tracked by the GC.

For constructors or when copying a top-level CAF directly this is still not an issue. Here’s an example where we move a CAF and a constructor that refers to a CAF to compact regions:

module Main where

import GHC.Compact

-- A CAF
x :: [Int]
x = enumFromTo 1 1000000

data D = D [Int]

main = do
-- Adding a CAF to a compact region
_ <- compact x

-- Adding a constructor that refers to a CAF to a compact region
_ <- compact (D x)

return ()

This is fine because when copying a thunk (in this case, the CAF x) we evaluate it and copy the value instead. So in compact x above what’s copied is the fully evaluated value of x. Similarly in compact (D x) we copy the constructor, and for the first field we first evaluate the thunk and copy the value.

This process of evaluating thunks when copying effectively eliminates CAF references from objects when copying them to a compact region.

Why can we not do the same for functions? Because unlike constructors, functions refer to CAFs in their code rather than in their payload.2 Here's an example:

module Main where

import GHC.Compact

x :: [Int]
x = enumFromTo 1 1000000

f :: () -> Int
f () = sum x

main = do
_ <- compact f -- This fails
return ()

Here f is a function with a CAF reference. Here’s its code in the final assembly:

.section .text
...
.globl Main.f_info
.type Main.f_info, @function
Main.f_info:
_c2vs:
leaq -16(%rbp),%rax
cmpq %r15,%rax
jb _c2vt
_c2vu:
movq $block_c2v5_info,-8(%rbp)
movq %r14,%rbx
addq $-8,%rbp
testb $7,%bl
jne _c2v5
_c2v6:
jmp *(%rbx)
.align 8
.quad 0
.long 30
.long Main.x_closure-(block_c2v5_info)+0
block_c2v5_info:
_c2v5:
movl $stg_INTLIKE_closure+257,%eax
movl $Main.x_closure,%ecx
...

Note the references to $Main.x_closure. If we wanted to copy this function to a compact region we'd have to update the code to replace these references with references to x's value in the compact region, which is quite hard to do.

To avoid dealing with this we simply don’t allow copying functions to compact regions.

### What about functions with no CAF references?

Functions with no CAF references don’t have tracked references in their code so it’s fine to copy them to a compact region. I recently implemented a proof-of-concept here. Here’s an example from GHC’s test suite that would fail before, but passes with my patch:

module Main where

import GHC.Compact

data HiddenFunction = HiddenFunction (Int -> Int)

main = do
_ <- compact (HiddenFunction (+1))
return ()

While allowing functions with no CAF references works fine, it’s not too useful in practice. Before explaining why, here’s a definition:

A closure is CAFFY if it directly or transitively refers to a CAF.

CAFFY-ness is the main property we’re interested in. For example, we could have a function that doesn’t refer to a CAF directly, but if one of the functions it calls refers to a CAF then the function is CAFFY, and CAFFY functions can’t be copied to a compact region.

With this definition in mind, there are two problems with allowing non-CAFFY functions in compact regions:

• Most non-trivial functions are CAFFY, hence allowing non-CAFFY functions is not too useful
• CAFFY-ness is hard to control

As evidence for the first point, here's a simple function:

f :: IO ()
f = putStrLn "hi"

This trivial function is CAFFY, for reasons related to code generation for string literals in GHC. It's not hard to guess that if a function this simple is CAFFY, then a lot of other functions in practice will be CAFFY as well.

For the second point I’ll again just give an example:

f :: () -> Int
f () = sum x
where
x :: [Int]
x = enumFromTo 1 1000000

Is this function CAFFY? The answer is complicated:

• It’s CAFFY with -O0 because -O0 implies -fignore-interface-pragmas, which means imported values (the Foldable dictionary in our example) are considered CAFFY.

• With -O0 -fno-ignore-interface-pragmas it’s not CAFFY.

• With -O it’s CAFFY again, because GHC generates this STG:

Main.f1 :: GHC.Types.Int
[GblId] =
    {} \u []
        case Main.$wgo 1# 0# of ww_s2yX [Occ=Once] {
          __DEFAULT -> GHC.Types.I# [ww_s2yX];
        };

Main.f :: () -> GHC.Types.Int
[GblId, Arity=1, Str=<S,1*H>, Unf=OtherCon []] =
    {} \r [ds_s2yY] case ds_s2yY of { () -> Main.f1; };

The problem is that f now refers to f1, which is a CAF, so f is now CAFFY.

In short, CAFFY-ness depends on things that are not in the programmer's control, like the CAFFY-ness of called functions, or how GHC's simplifier will behave, which depends on many things, like optimization levels and code generation flags. Pretty much every GHC version will generate slightly different code, causing changes in CAFFY-ness. You'll also get different CAFFY-ness properties in release builds compared to debug and test builds, because those are built with different GHC parameters. So even if we allowed non-CAFFY functions in compact regions, it'd be bad practice to rely on this.

Here's another example; try to guess whether this is a CAF or not (the language pragma should give a hint):

{-# LANGUAGE NoMonomorphismRestriction #-}

x = enumFromTo 1 1000000

### Conclusion

While it's possible to allow non-CAFFY functions in compact regions, I think it would be bad practice to rely on this behavior, as it's very difficult (even impossible in some cases) to predict and maintain CAFFY-ness.

There are ways to allow CAFFY functions in compact regions, like:

• Capturing global references from a function in the function's payload and referring to the payload instead of the global value directly. For example, in the f above, instead of referring to Main.x_closure directly, we could store Main.x_closure in the function's payload, and refer to that instead. That way we could copy a function to a compact region similar to how we copy constructors. The problem is this is much less efficient than referring to the closure directly, and this is a cost that every function would have to pay, not just the ones that we want to add to a compact region.
We could think about a pragma, say {-# COMPACTABLE f #-}, to generate “compactable” code for f where f's top-level references would be implemented as I explained above. I think this is workable, but it's still a lot of implementation effort. Also, any function that f directly or transitively refers to would have to be COMPACTABLE or non-CAFFY for this to work.

• Compact regions could have dynamic SRTs where CAFFY references of objects would be added as they're moved to a compact region. The GC would then track SRTs of compact regions.

There may be other ways to make this work as well. The problem is certainly not unsolvable, but it requires significant time investment to get done, and so far there hasn't been enough incentive for this.

Finally, there are techniques like defunctionalization that make it possible to express the same program using ADTs instead of functions, so not being able to add functions to compact regions is usually not a roadblock.

1. Note that “constant applicative form” is not enough to describe the problematic closures; only top-level CAFs that are represented as thunks are a problem. For example, a top-level x = 1 :: Int is not a problem, because even though it's a top-level CAF, it's not a thunk.↩︎

2. See this GHC wiki page for more details on GHC's heap object layout.↩︎

## March 23, 2020

### Monday Morning Haskell

# Blaze: Lightweight Html Generation

We've now got a little experience dealing with Haskell and HTML. In our last article we saw how to use some basic combinators within Reflex FRP to generate HTML. But let's take a step back and consider this problem in a simpler light. What if we aren't doing a full Reflex app? What if we just want to generate an HTML string in the context of a totally different application? Suppose we're using some other library to run our backend and want to send some HTML as a raw string. How can we generate this string?
We wouldn't go through the full effort of setting up a Nix application to run GHCJS and Reflex. We would like to do this with a simple Stack application. In the next couple of weeks, we'll consider two simple libraries we can use to generate HTML code. This week, we'll look at the Blaze HTML library. Next week we'll consider Lucid. Then after that, we'll investigate how we can serve the HTML we generate from a Servant server.

For some more ideas of production-ready libraries, download our Production Checklist! Try out some other platforms for database management or frontend development!

## Basic Combinators

Let's start with the basics. Blaze has a few things in common with the Reflex method of generating HTML data. It also uses a monadic type to produce the HTML tree. In Blaze, this monad is just called Html. Each new action produces a new element node in the tree. Almost every basic HTML element has its own function in the library. So we can start our tree with the basic html tag, and then provide a head element as well as a body.

{-# LANGUAGE OverloadedStrings #-}

import Text.Blaze.Html5 as H
import Text.Blaze.Html5.Attributes as A

basicHtml :: Html
basicHtml = html $ do
  H.head $ do
    H.title "My HTML page"
  body $ do
    h1 "Welcome to our site!"

In some cases, the HTML element names conflict with Haskell library functions. So we use a qualified import with the letter H or A to be more specific.

The above example will produce the following HTML:

<html>
<title>My HTML Page</title>
<body>
<h1>Welcome to our site!</h1>
</body>
</html>

We can get this as a string by using renderHtml from one of a few different modules in the library. For instance the "Pretty" renderer will give the above format, which is more human readable:

import Text.Blaze.Html.Renderer.Pretty

producePage :: String
producePage = renderHtml basicHtml

We can take our simple HTML now and add a few more elements. For instance, we can also add a "doctype" tag at the top, specifying that it is, in fact, HTML. This saves us from needing the basic html combinator. We can also nest different elements, such as lists:

basicHtml :: Html
basicHtml = docTypeHtml $ do
  H.head $ do
    H.title "My HTML page"
  body $ do
    h1 "Welcome to our site!"
    "This is just raw text"
    ul $ do
      li "First item"
      li "Second item"
      li "Third item"

One final observation here is that we can use raw strings as a monadic element. We need the OverloadedStrings extension for this to work. This just makes a raw text item in the HTML tree, without any wrapper. See how the raw text appears in our output here:

<!DOCTYPE HTML>

<html>
<title>My HTML Page</title>
<body>
<h1>Welcome to our site!</h1>
This is just raw text
<ul>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ul>
</body>
</html>

## Attributes

Now a key component of HTML is, of course, to use attributes with different items. This allows us to customize them with styles and various other properties. For example, when we use an image element, we should provide a "source" file as well as alternate text. We add different attributes to our items with the ! operator. This operator composes so we can add more attributes. Here is an example:

logoImage :: Html
logoImage = img ! src "logo.png" ! alt "The website's logo"

-- HTML

<img src="logo.png" alt="The website's logo"/>

Naturally, we'll want to use CSS with our page. In the head element we can add a stylesheet using a link element. Then we can apply classes to individual components using class_.

styledHtml :: Html
styledHtml = docTypeHtml $ do
  H.head $ do
    link ! rel "stylesheet" ! href "styles.css"
  body $ do
    H.div ! class_ "style-1" $ do
      "One kind of div"
    H.div ! class_ "style-2" $ do
      "A second kind of div"

## Using Haskell to Populate Types

Now since our Html elements are normal Haskell expressions, we can use any kind of Haskell type as an input. This can turn our elements into functions that depend on normal application data. For example, we can make a list out of different names:

renderNames :: [String] -> Html
renderNames names = do
  "Here are the names"
  ul $ forM_ names (li . toHtml)

We can also take a more complex data structure and use it as an input to our HTML elements. In this example, we'll show a user their points total if we have a User object. But if not, we'll encourage them to login instead.

data User = User
  { userName :: String
  , userPoints :: Int
  }

pointsDisplay :: Maybe User -> Html
pointsDisplay Nothing = a ! href "/login" $ "Please login!"
pointsDisplay (Just (User name points)) = H.div ! class_ "user-points" $ do
"Hi "
toHtml name
"!"
br
"You have "
toHtml points
" points!"

This sort of idea is at the heart of "server side rendering", which we'll explore later on in this series.

## Making a Form

Here's one final example, where we'll provide two different forms. One for creating a user account, and one for logging in. They each link to separate actions:

multiformPage :: Html
multiformPage = do
  H.head $ do
    H.title "Our Page"
    link ! rel "stylesheet" ! href "styles.css"
  body $ do
    h1 "Welcome to our site!"
    h2 $ H.span "New user?"
    H.div ! class_ "create-user-form" $ do
      H.form ! action "createUser" $ do
        input ! type_ "text" ! name "username"
        input ! type_ "email" ! name "email"
        input ! type_ "password" ! name "password"
        input ! type_ "submit" ! name "submit"
    br
    h2 $ H.span "Returning user?"
    H.div ! class_ "login-user-form" $ do
      H.form ! action "login" $ do
        input ! type_ "email" ! name "email"
        input ! type_ "submit" ! name "submit"

As we can see, monadic syntax gives us a very natural way to work with this kind of "tree building" operation.

## Conclusion

While Blaze is a much lighter dependency than Reflex, it does have limitations. There's no clear form of Haskell-based dynamism. To make our page dynamic, we'd have to include JavaScript files along with our generated HTML! And most of us Haskell developers don't want to be writing much JavaScript if we can avoid it.

There are still other ways we can use functional means to get the Javascript we want, besides Reflex! We'll explore those a bit later on.

So Blaze has some limitations, but it serves its purpose well. It's a lightweight way of generating HTML in a very intuitive way. Next week, we'll explore another library, Lucid, that has a similar goal.

You can also take a look at our Github repository to see the full code example for this article!

# Data structure challenge: finding the rightmost empty slot

Suppose we have a sequence of slots indexed from 1 to $n$. Each slot can be either empty or full, and all start out empty. We want to repeatedly do the following operation:

• Given an index $i$, find the rightmost empty slot at or before index $i$, and mark it full.

We can also think of this in terms of two more fundamental operations:

• Mark a given index $i$ as full.
• Given an index $i$, find the greatest index $j$ such that $j \leq i$ and $j$ is empty (or $0$ if there is no such $j$).

The simplest possible approach would be to use an array of booleans; then marking a slot full is trivial, and finding the rightmost empty slot before index $i$ can be done with a linear scan leftwards from $i$. But the challenge is this:
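The naive approach can be sketched directly. This is a list-based sketch for clarity (a mutable array would make marking a slot O(1)); the function names are illustrative, not from any particular library:

```haskell
-- Naive O(n) baseline: a list of Bools (True = full), indexed from 1.
type Slots = [Bool]

emptySlots :: Int -> Slots
emptySlots n = replicate n False

-- Find the rightmost empty slot at or before index i (0 if none),
-- and mark it full.
fillAtOrBefore :: Int -> Slots -> (Int, Slots)
fillAtOrBefore i slots =
  case [ j | j <- [i, i - 1 .. 1], not (slots !! (j - 1)) ] of
    (j : _) -> (j, [ if k == j then True else b
                   | (k, b) <- zip [1 ..] slots ])
    []      -> (0, slots)
```

Repeated queries at the same index walk further left each time, which is exactly the linear-scan behavior the challenge asks us to beat.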

Can you think of a data structure to support both operations in $O(\lg n)$ time or better?

You can think of this in either functional or imperative terms. I know of two solutions, which I’ll share in a subsequent post, but I’m curious to see what people will come up with.

Note that in my scenario, slots never become empty again after becoming full. As an extra challenge, what if we relax this to allow setting slots arbitrarily?

# The Ideal Mathematician

An intriguing essay by Philip J. Davis and Reuben Hersh.
The ideal mathematician’s work is intelligible only to a small group of specialists, numbering a few dozen or at most a few hundred. This group has existed only for a few decades, and there is every possibility that it may become extinct in another few decades. However, the mathematician regards his work as part of the very structure of the world, containing truths which are valid forever, from the beginning of time, even in the most remote corner of the universe.

# Evolving Import Style For Diff Friendliness

Imports in Haskell aren’t fun. Import lists are often noisy and huge, and their diffs can be truly nightmarish to compare. Using a new term often requires modifying the import list, which breaks your workflow. Fortunately, we can reduce some of this pain with a few choices in our stylish-haskell configuration and a script that gradually implements these changes in your codebase.

This post begins with a style recommendation, continues with a script to implement it gradually in your codebase, and finishes with a discussion on relevant import styles and how they affect review quality.

# The Blessed Style

I use stylish-haskell as my formatting tool. My editor’s default formatting choices with vim2hs work well for me (while I maintain that fork, it’s mostly a conglomeration of a bunch of changes that other people have made to it).

I have this shortcut defined to run stylish-haskell in vim:

" Haskell


This sets a mark, filters the file through stylish-haskell, and then returns to the mark.

stylish-haskell is configured by a .stylish-haskell.yaml file, and it will walk up the directory tree searching for one to configure the project with. I place mine in the root of the Haskell directory, right next to the stack.yaml or cabal.project files. Here are the contents that I recommend:

steps:
- imports:
align: none
list_align: with_module_name
long_list_align: new_line_multiline
empty_list_align: inherit
list_padding: 7 # length "import "
separate_lists: false
space_surround: false
- language_pragmas:
style: vertical
align: false
remove_redundant: true
- simple_align:
cases: false
top_level_patterns: false
records: false
- trailing_whitespace: {}

# You need to put any language extensions that are enabled for the entire project here.
language_extensions: []

# This is up to personal preference, but 80 is the right answer.
columns: 80


Let’s look at a diff that compares the default stylish-haskell and this configuration. I created a pull request against the servant-persistent example project to demonstrate the style. I left a bunch of review comments to explain the differences, and the UI for reading them is nice on GitHub. Here’s a reproduction of the differences:

- import           Init (runApp)
+ import Init (runApp)


We no longer indent so that module names are aligned. This helps keep the column count low, and makes it easier to just type this out manually without worrying about alignment.

- {-# LANGUAGE DataKinds     #-}
+ {-# LANGUAGE DataKinds #-}


We don’t align on pragmas anymore. The diff will only show a new language pragma, rather than highlighting every line that was changed just to re-align the pragmas.

- import           Api.User             (UserAPI, userApi, userServer)
- import           Config               (AppT (..), Config (..))
+ import Api.User (UserAPI, userApi, userServer)
+ import Config (AppT(..), Config(..))


We no longer align the explicit import lists along the longest module name. This is less noisy, because adding a new module import that is longer than any others will no longer trigger a reformat across all the imports.

- import           Database.Persist.Postgresql (Entity (..), fromSqlKey, insert,
-                                               selectFirst, selectList, (==.))
+ import Database.Persist.Postgresql
+        (Entity(..), fromSqlKey, insert, selectFirst, selectList, (==.))


If the module name and import list together go beyond the column count, then the import list is indented but kept on one line. This keeps the import lists compact in the smallest cases, where it’s easier to notice a small change.

- import           Servant              ((:<|>) ((:<|>)), Proxy (Proxy), Raw,
-                                        Server, serve, serveDirectoryFileServer)
+ import Servant
+        ( (:<|>)((:<|>))
+        , Proxy(Proxy)
+        , Raw
+        , Server
+        , serve
+        , serveDirectoryFileServer
+        )


If a newline indented import list expands beyond the column count, then it’ll put each term on a new line. This takes up space, but it’s really easy to read, and the diff for adding or removing an import line points to exactly the change that was made.

- import           Config                      (AppT (..))
- import           Data.HashMap.Lazy           (HashMap)
- import           Data.Text                   (Text)
- import           Lens.Micro                  ((^.))
- import           Models                      (User (User), runDb, userEmail,
-                                               userName)
- import qualified Models                      as Md
- import qualified System.Metrics.Counter      as Counter
+ import Config (AppT(..))
+ import Data.HashMap.Lazy (HashMap)
+ import Data.Text (Text)
+ import Lens.Micro ((^.))
+ import Models (User(User), runDb, userEmail, userName)
+ import qualified Models as Md
+ import qualified System.Metrics.Counter as Counter


The end result is less pretty. It’s a little more cluttered to read. However, it dramatically improves diffs and merge conflicts when using qualified and explicit imports, which will improve the overall readability of the codebase significantly.

# Automating the Migration

You don’t want to shotgun the entire project with this, because that’ll cause a nightmare of merge conflicts for everyone until the dust settles. But if you did, you could write:

$ stylish-haskell --inplace **/*.hs

This is fine for small projects with few collaborators. But on large projects with many collaborators, we want to make this a bit more gentle. So instead, we’ll only require that files changed in a given PR are formatted. We can get that information using git diff --name-status origin/master. If your “target” remote and branch aren’t origin and master, then substitute whatever you use. The output of that command looks like this:

M       .stylish-haskell.yaml
M       Setup.hs
M       app/Main.hs
M       src/Api.hs
M       src/Api/User.hs
M       src/Config.hs
M       src/DevelMain.hs
M       src/Init.hs
M       src/Logger.hs
M       src/Models.hs
M       test/ApiSpec.hs
M       test/UserDbSpec.hs

All of these symbols are M, but you can also get A for additions and R for replacements/rewrites, and we’ll want to stylish those up too. We’ll handle these cases in three steps, because it’s easiest. The first case is simply M, and we can focus on that with grep "^M". We only want Haskell files, so we’ll filter on those with grep ".hs". We want to get the second field, so we’ll do cut -f 2. Finally, we’ll send all the elements as arguments to stylish-haskell --inplace using xargs. The whole command is here:

git diff --name-status origin/master \
    | grep .hs \
    | grep "^M" \
    | cut -f 2 \
    | xargs stylish-haskell --inplace

Added files are handled the same way, but with grep "^A" instead. Replaced/rewritten files are slightly different. Those have three fields: the type (R), the original filename, and the destination/new filename. We only want the new filename, so the script looks like this:

# renamed files
git diff --name-status origin/master \
    | grep .hs \
    | grep "^R" \
    | cut -f 3 \
    | xargs stylish-haskell --inplace

The only real difference is the cut -f 3 field.
Our full script is:

#!/usr/bin/env bash

set -Eeux

# modified files
git diff --name-status origin/master \
    | grep .hs \
    | grep "^M" \
    | cut -f 2 \
    | xargs stylish-haskell --inplace

# added files
git diff --name-status origin/master \
    | grep .hs \
    | grep "^A" \
    | cut -f 2 \
    | xargs stylish-haskell --inplace

# renamed files
git diff --name-status origin/master \
    | grep .hs \
    | grep "^R" \
    | cut -f 3 \
    | xargs stylish-haskell --inplace

Save that somewhere as stylish-haskell.sh, and add an entry in your Makefile that references it (you do have a Makefile, right?). Now, we can run make imports and it’ll format all imports that have changed in our PR, but it won’t touch anything else. Over time, the codebase will converge on the new style, but only as people are working on relevant changes.

# Adding to CI

We can add this to CI by calling the script and seeing if anything changed. git has an option --exit-code that will cause git to exit with a failure if there is a difference. In this snippet, I have some uncommitted changes:

$ git diff --exit-code
diff --git a/Makefile b/Makefile
index f8d1636..df336de 100644
--- a/Makefile
+++ b/Makefile
@@ -6,4 +6,7 @@ ghcid-devel: ## Run the server in fast development mode. See DevelMain for detai
--command "stack ghci servant-persistent" \
--test "DevelMain.update"

-.PHONY: ghcid-devel help
+imports: ## Format all the imports that have changed since the master branch.
+
+.PHONY: ghcid-devel help imports

$ echo $?
1


We can use this to fail CI. In Travis CI, we can add the following lines:

script:
- make imports
- git diff --exit-code
- stack --no-terminal --install-ghc test


You can adapt this to whatever CI setup you need. However, you’ll probably need to install stylish-haskell in CI, too. Your build tool can handle that, just ensure that it’s present on the PATH.

# Why this style?

The default style is really aesthetically nice. Everything lines up, there’s a lot of horizontal whitespace, it’s uncluttered looking. But it just doesn’t scale!

It doesn’t look good with long module names. It doesn’t look good with long explicit import lists. It causes a ton of irrelevant diff noise and needless merge conflicts. It becomes a hassle when you’re working on a large codebase with other people.

So let’s look at all the choices, their alternatives, and why I selected these.

steps:
- imports:
align: none


Alignment is visually appealing but it creates diff noise and it consumes columns with whitespace that would better be used with meaning.

      list_align: with_module_name


This option is superfluous, because we have selected new_line_multiline for long_list_align.

      pad_module_names: false


The docs for this give the justification quite nicely:

Right-pad the module names to align imports in a group:

• true: a little more readable

> import qualified Data.List       as List (concat, foldl, foldr,
>                                           init, last, length)
> import qualified Data.List.Extra as List (concat, foldl, foldr,
>                                           init, last, length)

• false: diff-safe

> import qualified Data.List as List (concat, foldl, foldr, init,
>                                     last, length)
> import qualified Data.List.Extra as List (concat, foldl, foldr,
>                                           init, last, length)


Default: true

Ultimately, diff-safe is preferable to aesthetics, so we go with that.

      long_list_align: new_line_multiline


long_list_align determines what happens when the import list goes over the maximum column count.

This option is a recent addition. There are a few choices here, and you may actually prefer an even more diff-friendly approach than I do. new_line_multiline will indent if the module and list exceed the column length. If the new-line list also exceeds the column length, then it’ll put every import on its own line. This takes up a lot of space, but it’s fantastic for diffs and still quite readable.

      empty_list_align: inherit


This is a mostly irrelevant choice, since there is no alignment.

      list_padding: 7 # length "import "


This sets it up so that the import list clears the import keyword, providing a clean visual break between lines. You could go longer or shorter, but that’s up to you.

      separate_lists: false


separate_lists adds a space between a class and its methods, or a type and its constructors.

• true: There is a single space between the Foldable type and the list of its functions.

import Data.Foldable (Foldable (fold, foldl, foldMap))

• false: There is no space between the Foldable type and the list of its functions.

import Data.Foldable (Foldable(fold, foldl, foldMap))

I like it off, but this can go either way.

      space_surround: false


This doesn’t really matter and can go either way. With multiline and now new_line_multiline, this is probably better to be true.

Space surround option affects formatting of import lists on a single line. The only difference is single space after the initial parenthesis and a single space before the terminal parenthesis.

• true: There is single space associated with the enclosing parenthesis.

import Data.Foo ( foo )

• false: There is no space associated with the enclosing parenthesis

import Data.Foo (foo)

Default: false

  - language_pragmas:
style: vertical
align: false
remove_redundant: true


I know it looks nice to have aligned pragmas, but it’s annoying to view a diff and not easily tell what pragmas were added or removed. This makes it obvious.

  - simple_align:
cases: false
top_level_patterns: false
records: false


All of this visual alignment just ruins diffs. If you want visual alignment, align on an indentation boundary. Compare:

fromMaybe default maybeA = case maybeA of
Just a  -> a
Nothing -> default


This looks nice, but it’s annoying to maintain and change.

fromMaybe default maybeA =
case maybeA of
Just a ->
a
Nothing ->
default


You still get alignment of the important bits, but it’s now safe for diffs and refactoring.

Likewise, adding, removing, or changing a field to a record should only trigger a diff on the relevant fields. Anything else is noise that detracts from signal.

# Conclusion

Anyway, these are my recommendations for large projects that have multiple collaborators. If you’re working on a small project, then you don’t need to worry about anything here. These aren’t my aesthetic preferences, but these formatting choices annoy me a lot less than pretty code pleases me.

# The <- pure pattern

Summary: Sometimes <- pure makes a lot of sense, avoiding some common bugs.

In Haskell, in a monadic do block, you can use either <- to bind monadic values, or let to bind pure values. You can also use pure or return to wrap a value with the monad, meaning the following are mostly equivalent:

let x = myExpression
x <- pure myExpression

The one place they aren't fully equivalent is when myExpression contains x within it, for example:

let x = x + 1
x <- pure (x + 1)

With the let formulation you get an infinite loop which never terminates, whereas with the <- pure pattern you take the previously defined x and add 1 to it. To solve the infinite loop, the usual solution with let is to rename the variable on the left, e.g.:

let x2 = x + 1

And now make sure you use x2 everywhere from now on. However, x remains in scope, with a more convenient name, and the same type, but probably shouldn't be used. Given a sequence of such bindings, you often end up with:

let x2 = x + 1
let x3 = x2 + 1
let x4 = x3 + 1
...

Given a large number of unchecked indices that must be strictly incrementing, bugs usually creep in, especially when refactoring. The unused variable warning will sometimes catch mistakes, but not if a variable is legitimately used twice and one of those instances is incorrect.

Given the potential errors, when a variable x is morally "changing" in a way that makes the old x no longer useful, I find it much simpler to write:

x <- pure myExpression

The compiler now statically ensures we haven't fallen into the traps of an infinite loop (which is obvious and frustrating to track down) or using the wrong data (which is much harder to track down, and often very subtly wrong).
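A minimal sketch of the shadowing pattern in a pure do block (the Maybe monad here is just for illustration):

```haskell
-- Each x <- pure ... shadows the previous x instead of creating a
-- recursive binding, so there is no infinite loop.
example :: Maybe Int
example = do
  x <- pure 10
  x <- pure (x + 1)  -- uses the previous x (10)
  x <- pure (x * 2)  -- uses the previous x (11)
  pure x             -- Just 22
```

Replacing any of those binds with let x = x + 1 would instead produce a recursive definition that loops forever when forced.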

What I really want: What I actually think Haskell should have done is made let non-recursive, and had a special letrec keyword for recursive bindings (leaving where be recursive by default). This distinction is present in GHC Core, and would mean let was much safer.

What HLint does: HLint is very aware of the <- pure pattern, but also aware that a lot of beginners should be guided towards let. If any variable is defined more than once on the LHS of an <- then it leaves the do alone, otherwise it will suggest let for those where it fits.

Warnings: In the presence of mdo or do rec both formulations might end up being the same. If the left is a refutable pattern you change between error and fail, which might be quite different. Let bindings might be generalised. This pattern gives a warning about shadowed variables with -Wall.
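The refutable-pattern difference can be illustrated in the Maybe monad (a sketch; names are mine). A failed pattern on the left of <- goes through fail, while a let binding is lazy:

```haskell
-- A refutable pattern on the left of <- that fails calls fail,
-- which for Maybe is Nothing: the whole computation short-circuits.
viaBind :: Maybe Int
viaBind = do
  Just y <- pure (Nothing :: Maybe Int)  -- pattern fails => Nothing
  pure y

-- A let binding is lazy instead: the computation still "succeeds",
-- and forcing y would only throw a pattern-match error at runtime.
viaLet :: Maybe ()
viaLet = do
  let Just y = (Nothing :: Maybe Int)
  pure ()  -- y is never forced, so this is Just ()
```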

# [mbkfwmkw] Prime frieze

Start with a raster of width w, where w is a primorial.  Number the pixels starting from zero, left to right within a row, and rows from the top to bottom, like the layout of English text.  Color the composites white and primes black.

Here is an example of width w = 5# = 5 * 3 * 2 = 30.  ("It's raining primes!")

Some columns are always multiples of an integer, so they are white or mostly white.  (Mostly white when they have only a single prime in the first row.)  For example, the columns corresponding to 30k (the leftmost column) and 30k+4 are completely white, containing no primes.  The columns corresponding to 30k+2, 30k+3, and 30k+5 have only one prime each, namely 2, 3, and 5.  Eliminate the white and mostly white columns and mash the remaining columns next to each other.  We choose the starting width to be a primorial in order to be able to eliminate many white columns.

For our above example starting with 30 columns, 8 remain (the "remainder set") after removing white columns.  The 8 columns correspond to integers of the forms 30k + {1, 7, 11, 13, 17, 19, 23, 29}.

The number of columns that remain for various primorials is OEIS A005867.
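The remainder set is exactly the set of residues coprime to the primorial, so it can be computed directly (an illustrative sketch; the function name is mine):

```haskell
-- Columns that survive for width w are the residues r with gcd r w == 1:
-- any residue sharing a factor with w yields a column of multiples of
-- that factor, hence at most one prime.
remainderSet :: Int -> [Int]
remainderSet w = [ r | r <- [0 .. w - 1], gcd r w == 1 ]
```

For w = 30 this gives the 8 columns {1, 7, 11, 13, 17, 19, 23, 29} listed above, and for w = 11# = 2310 it gives 480 columns, matching OEIS A005867.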

What remains?  Are there patterns apparent in what remains?

Below is another example starting with w = 11# = 2310, leaving a remainder set of 480 columns after white columns are eliminated.  There are 270 rows, chosen to make the final aspect ratio the same as HD (480/270 = 1920/1080).  It's a nice coincidence that the number of columns of HD is a multiple of the remainder set size of 11 primorial.  There are primepi(11# * 270) - 5 = 50878 primes (black pixels) in the picture.  We subtract 5 because 2, 3, 5, 7, and 11 were in eliminated columns.

("Prime snow.")

Unlike the famous Ulam spiral, there don't seem to be any striking patterns or features.

I suspect questions about the clumpiness of the dots relate to the Riemann Hypothesis or Twin Prime Conjecture.  But we won't explore clumpiness any further at this time.

It is darker at the top and lighter at the bottom: primes thin out as log n (Prime Number Theorem).  We explore this gradual change in density.

Below, we started with a very tall 480x69120 remainder set image, primes less than 11# * 69120 = 159667200, or 8956716 primes.  We chopped the very tall image horizontally into 16 pieces (each piece 480x4320), rearranged the pieces into 16 columns, scaled the image 1/8, and increased contrast (pgmnorm).

We will call the 16 columns "macro columns" to distinguish them from columns of single pixels.  The boundaries between the first few macro columns are visible as the shading gets lighter.  Although primes thin out, they don't thin out very quickly.

We can imagine additional macro columns going out infinitely to the right.  Number all the macro columns left to right, starting from 0.  Instead of using macro columns 0 through 15 as we did above, we instead pick 16 non-contiguous macro columns [0, 1, 3, 7, 15, 31,... 2^15-1] and glue them next to each other.  This corresponds to a subset of the primes less than 11# * 4320 * 2^15 = 326998425600.  The macro column edges become (slightly) more visible because the image covers a greater range of prime densities.  (Mach banding further helps us see the boundaries.)

Previously, we considered sending these long strips of primes through a music box.

Below is an animated PNG (APNG) of primes thinning out over a large range.  Each frame is approximately 1000 times further along the number line.  We used apngasm to create the animation.

Below is the final frame of the animation, primes around 11# * 270 * (10^270 - 1).  Density has thinned out considerably.  There are 968 primes (black pixels) in this image.  ("It's full of stars!")  Contrast this with 50878 in the first frame.

In this directory are more images and Haskell source code used to generate the images.

(Update 2020-03-16: include link to image that inspired; fix formatting.)

# Reflex HTML Basics

Last week we used Nix to create a very simple application using the Reflex FRP framework. This framework uses the paradigm of Functional Reactive Programming to create web pages. It allows us to use functional programming techniques in a problem space with a lot of input and output.

In this week's article, we're going to explore this framework some more. We'll start getting a feel for the syntax Reflex uses for making different HTML elements. Once we're familiar with these basics, we can compare Reflex with other frontend Haskell tools.

There are several different options you can explore for making these kinds of pages. For some more ideas, download our Production Checklist. This will also suggest some different libraries you can use for your web app's backend!

## A Main Function

Let's start out by looking at the code for the very basic page we made last week. It combines a few of the simplest functions we'll need to be familiar with in Reflex.

{-# LANGUAGE OverloadedStrings #-}

module Frontend.Index where

runIndex :: IO ()
runIndex = mainWidget $ el "div" $ text "Welcome to Reflex!"

There are three different functions here: mainWidget, el, and text. The mainWidget function is our interface between Reflex types and the IO monad. It functions a bit like a runStateT function, allowing us to turn our page into a normal program we can run. Here is its type signature:

mainWidget :: (forall t. Widget t ()) -> IO ()

We provide an input in some kind of a Widget monad and it will convert it to an IO action. The t parameter is one we'll use throughout our type signatures. Reflex FRP will implicitly track a lot of different events on our page over time. This parameter signifies a particular "timeline" of events.

We won't need to get into too much detail about the parameter. There's only one case where different expressions can have different t parameters. This would be if we have multiple Reflex apps at the same time, and we won't get into this case.

There are other main functions we can use. Most likely, we would want to use mainWidgetWithCss for a full project. This takes a CSS string to apply over our page. We'll want to use the embedFile template function here. This converts a provided filepath into the actual CSS ByteString.

mainWidgetWithCss :: ByteString -> (forall t. Widget t ()) -> IO ()

runIndex = do
  let cssString = $(embedFile "static/styles.css")
  mainWidgetWithCss cssString $ el "div" $ text "Hello, Reflex!"

## Static Elements

The rest of our combinators will have HTML-oriented types. We'll start with our two simple combinators, text and el. These are both different kinds of "widgets" we can use. The first of these is straightforward enough. It takes a string (Text) and produces an element in a DomBuilder monad. The result will be a simple text element appearing on our webpage with nothing wrapping it.

text :: (DomBuilder t m) => Text -> m ()

So, for example, if we omitted the use of el above, the HTML for our web page body would look like:

<body>
Welcome to Reflex!
</body>

The el combinator then provides us with the chance to wrap one HTML element within another. We provide a first argument with a string for the type of element we're wrapping with. Then we give the monadic action for the HTML element within. In the case of our page, we wrap our original text element with a div.

el :: (DomBuilder t m) => Text -> m () -> m ()

runIndex = mainWidget $ el "div" $ text "Welcome to Reflex!"

This produces the following HTML in our body:

<body>
<div>Welcome to Reflex!</div>
</body>

Now, because an element takes a monad, we can compose more elements within it as deeply as we want. Here's an example with a couple nested lists:

runIndex = mainWidget $ el "div" $ do
  el "p" (text "Two Lists")
  el "ol" $ do
    el "li" (text "Number One")
    el "li" (text "Number Two")
    el "li" (text "Number Three")
  el "ul" $ do
    el "li" (text "First Item")
    el "li" (text "Second Item")
    el "li" (text "Third Item")

## Adding Attributes

Of course, there's more to HTML than creating elements. We'll also want to assign properties to our elements to customize their appearance. One simple way to do this is to use the elAttr combinator instead of el. This allows us to provide a map of attributes and values. Here's an example where we provide the filename, width, and height of an image element. Note that blank is the same as text "", an empty HTML element:

imageElement = elAttr "img"
  ("src" =: "checkmark.jpg" <>
   "height" =: "300" <>
   "width" =: "300")
  blank

-- Produced HTML
<img src="checkmark.jpg" height="300" width="300"></img>

Reflex has some specific combinators we can use to build an attribute map. The =: operator combines a key and a value to create a singleton map. We can append different maps with the monoid operator <>.

In general, we should handle CSS with static files elsewhere. We would create CSS classes that contain many different properties. We can then apply these classes to our HTML elements. The elClass combinator is an easy way to do this in Reflex.

styledText = elClass "p" "fancy" (text "Hello")

-- Produced HTML
<p class="fancy">Hello</p>

Now we don't need to worry about styling every individual element.

## Conclusion

We already have quite a few opportunities available to us to build our page. Still, it was a big hassle to use Nix and Reflex just to write some HTML. Next week, we'll start exploring more lightweight options for doing this in Haskell. For more resources on building Haskell web tools, download our Production Checklist!

## April 16, 2019

### Oskar Wickström

# Property-Based Testing in a Screencast Editor, Case Study 2: Video Scene Classification

In the last case study on property-based testing (PBT) in Komposition we looked at timeline flattening.
This post covers the video classifier, how it was tested before, and the bugs I found when I wrote property tests for it. If you haven’t read the introduction or the first case study yet, I recommend checking them out!

## Classifying Scenes in Imported Video

Komposition can automatically classify scenes when importing video files. This is a central productivity feature in the application, effectively cutting recorded screencast material automatically, letting the user focus on arranging the scenes of their screencast. Scenes are segments that are considered moving, as opposed to still segments:

• A still segment is a sequence of at least $$S$$ seconds of near-equal frames
• A moving segment is a sequence of non-equal frames, or a sequence of near-equal frames with a duration less than $$S$$

$$S$$ is a preconfigured minimum still segment duration in Komposition. In the future it might be configurable from the user interface, but for now it’s hard-coded.

Equality of two frames $$f_1$$ and $$f_2$$ is defined as a function $$E(f_1, f_2)$$, described informally as:

• comparing corresponding pixel color values of $$f_1$$ and $$f_2$$, with a small epsilon for tolerance of color variation, and
• deciding two frames equal when at least 99% of corresponding pixel pairs are considered equal.

In addition to the rules stated above, there are two edge cases:

1. The first segment is always considered a moving segment (even if it’s just a single frame)
2. The last segment may be a still segment with a duration less than $$S$$

The second edge case is not what I would call a desirable feature, but rather a shortcoming due to the classifier not doing any type of backtracking. This could be changed in the future.

## Manually Testing the Classifier

The first version of the video classifier had no property tests.
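The frame-equality rule $$E(f_1, f_2)$$ described above can be sketched as a predicate. This is my own illustration, not Komposition's actual implementation; for simplicity a frame is a flat list of 8-bit pixel values:

```haskell
-- A frame as a flat list of pixel intensities (0-255).
type Frame = [Int]

-- Two frames are "near-equal" when at least minRatio of corresponding
-- pixel pairs differ by no more than epsilon.
equalFrame :: Int -> Double -> Frame -> Frame -> Bool
equalFrame epsilon minRatio f1 f2 =
  let pairs   = zip f1 f2
      matches = length [ () | (p1, p2) <- pairs, abs (p1 - p2) <= epsilon ]
  in fromIntegral matches >= minRatio * fromIntegral (length pairs)
```

With minRatio = 0.99 this matches the informal "at least 99% of pixel pairs are considered equal" rule stated above.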
Instead, I wrote what I thought was a decent classifier algorithm, mostly messing around with various pixel buffer representations and parallel processing to achieve acceptable performance.

The only type of testing I had available, except for general use of the application, was a color-tinting utility. This was a separate program using the same classifier algorithm. It took a video file as input, and produced as output a video file in which each frame was tinted green or red, for moving and still frames respectively.

In the recording above you see the color-tinted output video based on a recent version of the classifier. It classifies moving and still segments rather accurately. Before I wrote property tests and fixed the bugs that I found, it did not look so pretty, flipping back and forth at seemingly random places.

At first, debugging the classifier with the color-tinting tool seemed like a creative and powerful technique. But the feedback loop was horrible: I had to record video, process it using the slow color-tinting program, and inspect the result by eye. In hindsight, I can conclude that PBT is far more effective for testing the classifier.

## Video Classification Properties

Figuring out how to write property tests for video classification wasn’t obvious to me. It’s not uncommon in example-based testing that tests end up mirroring the structure, and even the full implementation complexity, of the system under test. The same can happen in property-based testing. With some complex systems it’s very hard to describe correctness as a relation between any valid input and the system’s observed output.

The video classifier is one such case. How do I decide if an output classification is correct for a specific input, without reimplementing the classification itself in my tests? The other way around is easy, though! If I have a classification, I can convert that into video frames.
Thus, the solution to the testing problem is not to generate the input, but instead to generate the expected output. Hillel Wayne calls this technique “oracle generators” in his recent article.1

The classifier property tests generate high-level representations of the expected classification output, which are lists of values describing the type and duration of segments. Next, the list of output segments is converted into a sequence of actual frames. Frames are two-dimensional arrays of RGB pixel values. The conversion is simple:

• Moving segments are converted to a sequence of alternating frames, flipping between all gray and all white pixels
• Still segments are converted to a sequence of frames containing all black pixels

The example sequence in the diagram above, when converted to pixel frames with a frame rate of 10 FPS, can be visualized like in the following diagram, where each thin rectangle represents a frame:

By generating high-level output and converting it to pixel frames, I have input to feed the classifier with, and I know what output it should produce. Writing effective property tests then comes down to writing generators that produce valid output, according to the specification of the classifier. In this post I’ll show two such property tests.

## Testing Still Segment Minimum Length

As stated in the beginning of this post, classified still segments must have a duration greater than or equal to $$S$$, where $$S$$ is the minimum still segment duration used as a parameter for the classifier. The first property test we’ll look at asserts that this invariant holds for all classification output.

hprop_classifies_still_segments_of_min_length = property $ do

-- 1. Generate a minimum still segment length/duration
minStillSegmentFrames <- forAll $ Gen.int (Range.linear 2 (2 * frameRate))
let minStillSegmentTime = frameCountDuration minStillSegmentFrames

-- 2. Generate output segments
segments <- forAll $
genSegments (Range.linear 1 10)
(Range.linear 1
(minStillSegmentFrames * 2))
(Range.linear minStillSegmentFrames
(minStillSegmentFrames * 2))
resolution

-- 3. Convert test segments to actual pixel frames
let pixelFrames = testSegmentsToPixelFrames segments

-- 4. Run the classifier on the pixel frames
let counted = classifyMovement minStillSegmentTime (Pipes.each pixelFrames)
& Pipes.toList
& countSegments

-- 5. Sanity check
countTestSegmentFrames segments === totalClassifiedFrames counted

-- 6. Ignore last segment and verify all other segments
case initMay counted of
Just rest ->
traverse_ (assertStillLengthAtLeast minStillSegmentTime) rest
Nothing -> success
where
resolution = 10 :. 10

This chunk of test code is pretty busy, and it’s using a few helper functions that I’m not going to bore you with. At a high level, this test:

1. Generates a minimum still segment duration, based on a minimum frame count (let’s call it $$n$$) in the range $$[2, 20]$$. The classifier currently requires that $$n \geq 2$$, hence the lower bound. The upper bound of 20 frames is an arbitrary number that I’ve chosen.
2. Generates valid output segments using the custom generator genSegments, where
• moving segments have a frame count in $$[1, 2n]$$, and
• still segments have a frame count in $$[n, 2n]$$.
3. Converts the generated output segments to actual pixel frames. This is done using a helper function that returns a list of alternating gray and white frames, or all black frames, as described earlier.
4. Counts the number of consecutive frames within each segment, producing a list like [Moving 18, Still 5, Moving 12, Still 30].
5. Performs a sanity check that the number of frames in the generated expected output is equal to the number of frames in the classified output. The classifier must not lose or duplicate frames.
6. Drops the last classified segment, which according to the specification can have a frame count less than $$n$$, and asserts that all other still segments have a frame count greater than or equal to $$n$$.
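The conversion in step 3 can be sketched as a small standalone function. The types and names below are invented for illustration; this is not Komposition's actual testSegmentsToPixelFrames, just a minimal sketch of the "moving segments alternate gray/white, still segments are all black" rule described earlier.

```haskell
-- Minimal sketch of the segments-to-frames conversion (hypothetical types,
-- not Komposition's actual code).
data Pixel = White | Gray | Black deriving (Eq, Show)

-- A segment is its kind plus a frame count.
data Segment = Moving Int | Still Int deriving (Eq, Show)

-- Each "frame" is reduced to a single pixel value here; a real frame would
-- be a two-dimensional array of RGB values, all set to this color.
segmentFrames :: Segment -> [Pixel]
segmentFrames (Moving n) = take n (cycle [Gray, White]) -- alternating frames
segmentFrames (Still n)  = replicate n Black            -- all-black frames

segmentsToFrames :: [Segment] -> [Pixel]
segmentsToFrames = concatMap segmentFrames

main :: IO ()
main = print (segmentsToFrames [Moving 3, Still 2])
-- prints [Gray,White,Gray,Black,Black]
```

The real conversion also has to account for resolution and frame rate, but the segment-to-frame-sequence structure is the same.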

Let’s run some tests.

> :{
| hprop_classifies_still_segments_of_min_length
|   & Hedgehog.withTests 10000
|   & Hedgehog.check
| :}
✓ <interactive> passed 10000 tests.

Cool, it looks like it’s working.

## Sidetrack: Why generate the output?

Now, you might wonder why I generate output segments first, and then convert to pixel frames. Why not generate random pixel frames to begin with? The property test above only checks that the still segments are long enough!

The benefit of generating valid output becomes clearer in the next property test, where I use it as the expected output of the classifier. Converting the output to a sequence of pixel frames is easy, and I don’t have to state any complex relation between the input and output in my property. When using oracle generators, the assertions can often be plain equality checks on generated and actual output.

But there’s benefit in using the same oracle generator for the “minimum still segment length” property, even if it’s more subtle. By generating valid output and converting to pixel frames, I can generate inputs that cover the edge cases of the system under test. Using property test statistics and coverage checks, I could inspect coverage, and even fail test runs where the generators don’t hit enough of the cases I’m interested in.2

Had I generated random sequences of pixel frames, then perhaps the majority of the generated examples would only produce moving segments. I could tweak the generator to get closer to either moving or still frames, within some distribution, but wouldn’t that just be a variation of generating valid scenes? It would be worse, in fact. I wouldn’t then be reusing existing generators, and I wouldn’t have a high-level representation that I could easily convert from and compare with in assertions.
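Stepping outside Komposition for a moment, the essence of the oracle-generator approach can be shown as a self-contained toy round trip (all names here are invented, and the stand-in "classifier" is a trivial run-length grouping, far simpler than the real one): generate the expected segments, render them to frames, classify, and compare with plain equality.

```haskell
import Data.List (group)

-- Toy stand-ins, invented for illustration: a frame is the character 'm'
-- (moving) or 's' (still); a segment is its kind plus a frame count.
data Segment = Moving Int | Still Int deriving (Eq, Show)

-- "Convert expected output to frames": the easy direction.
render :: [Segment] -> String
render = concatMap toFrames
  where
    toFrames (Moving n) = replicate n 'm'
    toFrames (Still n)  = replicate n 's'

-- A trivial run-length "classifier" standing in for the system under test.
classify :: String -> [Segment]
classify = map toSegment . group
  where
    toSegment run@('m' : _) = Moving (length run)
    toSegment run           = Still (length run)

-- The assertion is plain equality between generated and actual output.
-- Note this only holds for *valid* generated output (positive frame counts,
-- alternating kinds), which is exactly why the generators must respect the
-- specification.
prop_roundTrip :: [Segment] -> Bool
prop_roundTrip segs = classify (render segs) == segs

main :: IO ()
main = print (prop_roundTrip [Moving 2, Still 3, Moving 1])
-- prints True
```

In a real Hedgehog property the segment list would come from a generator and the comparison would use ===, but the shape of the argument is the same.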

## Testing Moving Segment Time Spans

The second property states that the classified moving segments must start and end at the same timestamps as the moving segments in the generated output. Compared to the previous property, the relation between generated output and actual classified output is stronger.

hprop_classifies_same_scenes_as_input = property $ do
-- 1. Generate a minimum still segment duration
minStillSegmentFrames <- forAll $ Gen.int (Range.linear 2 (2 * frameRate))
let minStillSegmentTime = frameCountDuration minStillSegmentFrames

-- 2. Generate test segments
segments <- forAll $
  genSegments (Range.linear 1 10)
              (Range.linear 1 (minStillSegmentFrames * 2))
              (Range.linear minStillSegmentFrames
                            (minStillSegmentFrames * 2))
              resolution

-- 3. Convert test segments to actual pixel frames
let pixelFrames = testSegmentsToPixelFrames segments

-- 4. Convert expected output segments to a list of expected time spans
-- and the full duration
let durations = map segmentWithDuration segments
    expectedSegments = movingSceneTimeSpans durations
    fullDuration = foldMap unwrapSegment durations

-- 5. Classify movement of frames
let classifiedFrames =
      Pipes.each pixelFrames
      & classifyMovement minStillSegmentTime
      & Pipes.toList

-- 6. Classify moving scene time spans
let classified =
      (Pipes.each classifiedFrames
       & classifyMovingScenes fullDuration)
      >-> Pipes.drain
      & Pipes.runEffect
      & runIdentity

-- 7. Check classified time span equivalence
expectedSegments === classified

where
  resolution = 10 :. 10

Steps 1–3 are the same as in the previous property test. From there, this test:

4. Converts the generated output segments into a list of time spans. Each time span marks the start and end of an expected moving segment. Furthermore, it needs the full duration of the input in step 6, so that’s computed here.
5. Classifies the movement of each frame, i.e. whether it’s part of a moving or still segment.
6. Runs the second classifier function, classifyMovingScenes, based on the full duration and the frames with classified movement data, resulting in a list of time spans.
7. Compares the expected and actual classified lists of time spans.

While this test looks somewhat complicated with its setup and various conversions, the core idea is simple. But is it effective?

### Bugs! Bugs everywhere!

Preparing for a talk on property-based testing, I added the “moving segment time spans” property a week or so before the event. At this time, I had used Komposition to edit multiple screencasts. Surely, all significant bugs were caught already.
Adding property tests should only confirm the level of quality the application already had. Right?

Nope. First, I discovered that my existing tests were fundamentally incorrect to begin with. They were not reflecting the specification I had in mind, the one I described in the beginning of this post. Furthermore, I found that the generators had errors.

At first, I used Hedgehog to generate the pixels used for the classifier input. Moving frames were based on a majority of randomly colored pixels and a small percentage of equally colored pixels. Still frames were based on a single random color. The problem I had not anticipated was that the colors used in moving frames were not guaranteed to be distinct from the color used in still frames. In small-sized examples I got black frames at the beginning and end of moving segments, and black frames for still segments, resulting in different classified output than expected. Hedgehog shrinking the failing examples’ colors towards 0, which is black, highlighted this problem even more.

I made my generators much simpler, using the alternating white/gray frames approach described earlier, and went on to run my new shiny tests. Here’s what I got:

What? Where does 0s–0.6s come from? The classified time span should’ve been 0s–1s, as the generated output has a single moving scene of 10 frames (1 second at 10 FPS).

I started digging, using the annotate function in Hedgehog to inspect the generated and intermediate values in failing examples. I couldn’t find anything incorrect in the generated data, so I shifted focus to the implementation code. The end timestamp 0.6s was consistently showing up in failing examples. Looking at the code, I found a curious hard-coded value 0.5 being bound and used locally in classifyMovement. The function is essentially a fold over a stream of frames, where the accumulator holds vectors of previously seen and not-yet-classified frames.
Stripping down and simplifying the old code to highlight one of the bugs, it looked something like this:

classifyMovement minStillSegmentTime =
  case ... of
    InStillState{..} ->
      if someDiff > minEqualTimeForStill
        then ...
        else ...
    InMovingState{..} ->
      if someOtherDiff >= minStillSegmentTime
        then ...
        else ...
  where
    minEqualTimeForStill = 0.5

Let’s look at what’s going on here. In the InStillState branch it uses the value minEqualTimeForStill, instead of always using the minStillSegmentTime argument. This is likely a residue from some refactoring where I meant to make the value a parameter instead of having it hard-coded in the definition.

Sparing you the gory implementation details, I’ll outline two more problems that I found. In addition to using the hard-coded value, it incorrectly classified frames based on that value. Frames that should’ve been classified as “moving” ended up “still”. That’s why I didn’t get 0s–1s in the output.

Why didn’t I see 0s–0.5s, given the hard-coded value 0.5? Well, there was also an off-by-one bug, in which one frame was classified incorrectly together with the accumulated moving frames.

The classifyMovement function is 30 lines of Haskell code juggling some state, and I managed to mess it up in three separate ways at the same time. With these tests in place I quickly found the bugs and fixed them. I ran thousands of tests, all passing. Finally, I ran the application, imported a previously recorded video, and edited a short screencast. The classified moving segments were notably better than before.

## Summary

A simple streaming fold can hide bugs that are hard to detect with manual testing. The consistent result of 0.6, together with the hard-coded value 0.5 and a frame rate of 10 FPS, pointed clearly towards an off-by-one bug. I consider this a great showcase of how powerful shrinking in PBT is, consistently presenting minimal examples that point towards specific problems. It’s not just a party trick on ideal mathematical functions.
Could these errors have been caught without PBT? I think so, but what effort would it require? Manual testing and introspection did not work for me. Code review might have revealed the incorrect definition of minEqualTimeForStill, but perhaps not the off-by-one and incorrect state handling bugs. There are of course many other QA techniques; I won’t evaluate them all here. But given the low effort that PBT requires in this setting, the number of problems it finds, and the accuracy it provides when troubleshooting, I think it’s a clear win.

I also want to highlight the iterative process that I find naturally emerges when applying PBT:

1. Think about how your system is supposed to work. Write down your specification.
2. Think about how to generate input data and how to test your system, based on your specification. Tune your generators to provide better test data. Try out alternative styles of properties. Perhaps model-based or metamorphic testing fits your system better.
3. Run tests and analyze the minimal failing examples. Fix your implementation until all tests pass.

This can be done when modifying existing code, or when writing new code. You can apply this without having any implementation code yet, perhaps just a minimal stub, and the workflow is essentially the same as in TDD.

## Coming Up

The final post in this series will cover testing at a higher level of the system, with effects and multiple subsystems being integrated to form a full application. We will look at property tests that found many bugs and that made a substantial refactoring possible.

1. Introduction
2. Timeline Flattening
3. Video Scene Classification
4. Integration Testing

Until then, thanks for reading!

## Credits

Thank you Ulrik Sandberg, Pontus Nagy, and Fredrik Björeman for reviewing drafts of this post.

## Footnotes

1. See the “Oracle Generators” section in Finding Property Tests.↩︎
2. John Hughes’ talk Building on developers’ intuitions goes into depth on this.
There’s also work being done to provide similar functionality for Hedgehog.↩︎

## March 12, 2020

### Philip Wadler

# Try out the new Mandelbrot Maps, Part II

Another one of my honours project students, Freddie Bawden, has also done a great job with an update to Mandelbrot Maps. He's looking for feedback. Try it out!

For my final year project I've built an interactive fractal viewer using WebAssembly and Web Workers to create a multithreaded renderer. You can try it now at mmaps.freddiejbawden.com! Feedback can be left at mmaps.freddiejbawden.com/feedback and is greatly appreciated. Thanks!

### Joey Hess

# watch me program for half an hour

In this screencast, I implement a new feature in git-annex. I spend around 10 minutes writing Haskell code, 10 minutes staring at type errors, and 10 minutes writing documentation. A normal coding session for me. I give a play-by-play, and some thoughts on what programming is like for me these days.

Not shown is the hour I spent the next day changing the "optimize" subcommand implemented here into "--auto" options that can be passed to git-annex's get and drop commands.

watched it all, liked it (60%)
watched some, boring (8%)
too long for me (3%)
too haskell for me (14%)
not interested (13%)

Total votes: 105

### Tweag I/O

# Inferred or Specified Types? Your Choice!

Gert-Jan Bottu

During my internship at Tweag, I got the opportunity to work on the GHC Haskell compiler, under the mentorship of Richard Eisenberg. For the first third of my internship, I tackled the implementation of Proposal 99. This proposal introduces additional syntax to the language, allowing programmers to manually annotate type variables with their specificity. In this blog post, I will describe specificity, the proposal's features, and why it could prove useful to you as a developer. In order to tackle this first question, let us look at the title of the proposal: "Explicit specificity in type variable binders".

## Specificity?
In order to explain what specificity means, we first have to take a step back and look at type applications.

### Type Applications

The TypeApplications language extension was introduced in GHC 8.0.1, in 2016, and is currently used by over 900 packages on Hackage. The original paper by Richard Eisenberg et al. does a great job of motivating and explaining the feature, but it boils down to this (example taken from the paper):

Imagine we want to write a function normalize which parses a String and then pretty-prints the result. The function could, for example, remove any redundant brackets in expressions. Writing this function, however, is not trivial. A first version could look like this, using the predefined read :: forall a. Read a => String -> a and show :: forall a. Show a => a -> String:

normalize :: forall a. (Show a, Read a) => String -> String
normalize s = show (read s)

However, this code is (rightly) rejected, as GHC can't infer the output type of read, making the code ambiguous. We can solve this by manually instantiating the polymorphic type of read as follows:

normalize :: forall a. (Show a, Read a) => String -> String
normalize s = show (read @a s)

This instantiates the return type of read to be the type variable a as declared in the type signature of normalize. Note that as the type has yet to be instantiated, visible type application is required at the call site.

### Specificity

So now that we know what visible type application looks like, let's give it a shot ourselves. We'll just enable the language flag, write the first simple polymorphic function that comes to mind, and try instantiating it:

{-# LANGUAGE TypeApplications #-}
module Main where

id' x = x

id_int = id' @Int

main = putStrLn "Hi there!"
Let's load it up in GHCi:

/home/gertjan/Desktop/TypeApp.hs:7:10: error:
    • Cannot apply expression of type ‘p0 -> p0’
      to a visible type argument ‘Int’
    • In the expression: id' @Int
      In an equation for ‘id_int’: id_int = id' @Int
  |
7 | id_int = id' @Int
  |          ^^^^^^^^

Hmm, that's a bit strange. Instantiating the predefined id function works fine, though:

id_int = id @Int

So what is really going on here? Well, let's ask GHCi to clarify by showing the types, setting -fprint-explicit-foralls so that we can see the type variable binders:

*Main> :set -fprint-explicit-foralls
*Main> :info id
id :: forall a. a -> a
*Main> :info id'
id' :: forall {p}. p -> p

What's happening here is that, because I did not provide a type annotation for my id' function, GHC effectively treats its type fundamentally differently from the type of the (annotated) predefined id function, defined as follows in base:

id :: a -> a
id x = x

Since the predefined version of id has a type signature which explicitly abstracts over the type variable a, this variable is marked as specified. On the other hand, our id' function does not have a type signature, making its type variable p inferred, as shown by the braces in the above example.

So why make this distinction? Because the lack of a type signature makes inferred variable binders inherently a bit unstable. After all, who is to say that the next update of GHC won't alter the order of the inferred foralls, for example? For this reason, type application is limited to specified variables only: inferred type variables cannot be manually instantiated.

How do other programming languages handle this issue of unstable inferred type variables? Languages like Agda, Java and C++ do feature type instantiation, but as none of them automatically generalises functions without user-defined type signatures, generics, or templates, respectively, they do not have inferred type variables.
On the other hand, languages like ML do infer polymorphic types (with inferred type variables as a consequence), but do not have syntax for visible type application. Finally, Idris is the only language I know of that both generalises and features type instantiation. As such, Idris takes a very similar approach to GHC's: while it does not show this distinction to the programmer, the compiler differentiates between type variables that arise from a user annotation and those that do not, and only allows instantiation of the former. You can find an example of this here.

## Explicit Specificity?

To recap: writing a type signature makes the type variables specified, and thus available for instantiation. While the rule seems sensible, this is not always what we might want. Consider the following type signature:

{-# LANGUAGE PolyKinds, KindSignatures #-}
import Type.Reflection

typeRep :: Typeable (a :: k) => TypeRep (a :: k)

The example is adapted from A Reflection on Types by Peyton Jones et al. For the purposes of this blog post, it is not important to go into too much detail on typeRep, besides looking at its type variables. While writing this function, the programmer wants to annotate it as kind polymorphic, by explicitly mentioning the k kind variable. Unfortunately, the full type of typeRep now becomes forall k (a :: k). Typeable a => TypeRep a. The k variable has become specified, which means that in order to instantiate a, users of our function always have to instantiate k first. Inferring the kind is trivial, which makes having to write something like typeRep @Type @Int or even typeRep @_ @Int in order to instantiate a quite silly.

The new explicit specificity extension allows us to manually annotate type variables that we want to act as inferred variables. We do this by placing braces around the variables we want to be inferred:

typeRep' :: forall {k} (a :: k).
             Typeable a => TypeRep a

## In summary

Writing a type signature for your functions is always a good idea, as it serves both as a check on your code and as documentation. Writing a type signature is also necessary for -XScopedTypeVariables. However, writing a type signature in current GHC forces all the variables occurring in the signature to be specified. This alters the function's interface! Explicit specificity remedies this and lets you write exactly the type signature you want. The original proposal features additional use cases. I encourage anyone who might be interested to have a look at the proposal.

## How can I use this?

The code is currently under review and will be released in a future GHC version. If you want to try some examples out for yourself, we have a Docker container available with this development build of GHC. Just pull the container with docker pull gertjanb/explicit-specificity-ghc. Once the container has finished downloading, you can launch your new development GHC build using docker run -it gertjanb/explicit-specificity-ghc, and start experimenting. For example, a good way to start is by defining a simple function (don't forget to enable the -XRankNTypes extension):

Prelude> :set -XRankNTypes
Prelude> let foo :: forall a {b}. a -> b -> b ; foo x y = y

Play with the type signature, see what type GHCi assigns to foo, and try instantiating the type variables. Have a look at the GHC proposal to find out more, and to see where the new syntax is and isn't allowed. Have fun!

## Will my code break?

For regular Haskell code, the new feature is entirely backwards compatible with existing code bases, since it only introduces new syntax. The update won't change the behaviour of your code in any way, unless you decide to use the new syntax. The same can't be said for Template Haskell code, unfortunately.
As this new syntax is available in Template Haskell as well, the language AST had to be altered to allow for passing this additional information around. Concretely, TyVarBndrs are now annotated with a flag to store additional information. In the case of forall-types and data constructors, they are annotated with their Specificity, which is either SpecifiedSpec or InferredSpec. Updating your code should thus be as simple as following the type-checker and updating the types you wrote with the correct flag.

## Closing Remarks

I'm grateful to Tweag for supporting this internship, for encouraging me to work on exciting projects like this, and for introducing me to interesting people who share my passion for functional programming. I'd also like to thank Richard Eisenberg for his mentorship, his insights and his enthusiasm for the topic. Finally, I hope you, the reader, enjoyed reading this post. Expect a follow-up post to be arriving soon! :)

### Jasper Van der Jeugt

# Visual Arrow Syntax

Not to be taken seriously.

Haskell is great at building DSLs – which are perhaps the ultimate form of slacking off at work. Rather than actually doing the work your manager tells you to, you can build DSLs to delegate this back to your manager so you can focus on finally writing up that GHC proposal for MultilinePostfixTypeOperators (which could have come in useful for this blogpost). So, we'll build a visual DSL that's so simple even your manager can use it!

This blogpost is a literate Haskell file so you can run it directly in GHCi. Note that some code is located in a second module because of compilation stage restrictions.

Let's get started. We'll need a few language extensions – not too many, just enough to guarantee job security for the foreseeable future.
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
module Visual where

And then some imports, not much going on here.

import qualified Codec.Picture as JP
import qualified Codec.Picture.Types as JP
import Control.Arrow
import Control.Category
import Control.Monad.ST (runST)
import Data.Char (isUpper)
import Data.Foldable (for_)
import Data.List (sort, partition)
import qualified Language.Haskell.TH as TH
import Prelude hiding (id, (.))

All Haskell tutorials that use some form of dependent typing seem to start with the HList type. So I suppose we'll do that as well.

data HList (things :: [*]) where
  Nil :: HList '[]
  Cons :: x -> HList xs -> HList (x ': xs)

I think HList is short for hype list. There's a lot of hype around this because it allows you to put even more types in your types. We'll require two auxiliary functions for our hype list. Because of all the hype, they each require a type family in order for us to even express their types. The first one just takes the last element from a list.

hlast :: HList (thing ': things) -> Last (thing ': things)
hlast (Cons x Nil) = x
hlast (Cons _ (Cons y zs)) = hlast (Cons y zs)

type family Last (l :: [*]) :: * where
  Last (x ': '[]) = x
  Last (x ': xs) = Last xs

Readers may wonder if this is safe, since last is usually a partial function. Well, it turns out that partial functions are safe if you type them using partial type families. So one takeaway is that partial functions can just be fixed by adding more partial stuff on top. This explains things like Prelude. Anyway, the second auxiliary function drops the last element from a list.
hinit :: HList (thing ': things) -> HList (Init (thing ': things))
hinit (Cons _ Nil) = Nil
hinit (Cons x (Cons y zs)) = Cons x (hinit (Cons y zs))

type family Init (l :: [*]) :: [*] where
  Init (_ ': '[]) = '[]
  Init (x ': y ': zs) = x ': Init (y ': zs)

And that's enough boilerplate! Let's get right to it.

It's always good to pretend that your DSL is built on solid foundations. As I alluded to in the title, we'll pick Arrows. One reason for that is that they're easier to explain to your manager than Applicative (stuff goes in, other stuff comes out, see? They're like the coffee machine in the hallway). Secondly, they are less powerful than Monads and we prefer to keep that good stuff to ourselves.

Unfortunately, it seems like the Arrow module was contributed by an operator fetishism cult, and anyone who's ever done non-trivial work with Arrows now has a weekly therapy session to talk about how &&& and *** hurt them. This is not syntax we want anyone to use. Instead, we'll, erm, slightly bend Haskell's syntax to get something that is "much nicer" and "definitely not an abomination".

We'll build something that appeals to both Category Theorists (for street cred) and Corporate Managers (for our bonus). These two groups have many things in common. Apart from talking a lot about abstract nonsense and getting paid for it, both love drawing boxes and arrows. Yeah, so I guess we can call this visual DSL a Diagram.

The main drawback of arrows is that they can only have a single input and output. This leads to a lot of tuple abuse. We'll "fix" that by having extra ins and outs. We are wrapping an arbitrary Arrow, referred to as f in the signature:

data Diagram (ins :: [*]) (outs :: [*]) f a b where

We can create a diagram from a normal arrow, that's easy.

  Diagram :: f a b -> Diagram '[] '[] f a b

And we can add another normal function at the back. No biggie.
  Then :: Diagram ins outs f a b -> f b c -> Diagram ins outs f a c

Of course, we need to be able to use our extra inputs and outputs. Output wraps an existing Diagram and redirects the second element of a tuple to the outs; and Input does it the other way around.

  Output :: Diagram ins outs f a (b, o) -> Diagram ins (o ': outs) f a b

  Input :: Diagram ins outs f a b -> Diagram (i ': ins) outs f a (b, i)

The hardest part is connecting two existing diagrams. This is really where the magic happens:

  Below :: Diagram ins1 outs1 f a b
        -> Diagram (Init (b ': outs1)) outs2 f (Last (b ': outs1)) c
        -> Diagram ins1 outs2 f a c

Is this correct? What does it even mean? The answer to both questions is: "I don't know". It typechecks, which is what really matters when you're doing Haskell. And there's something about ins matching outs in there, yeah.

Concerned readers of this blog may at this point be wondering why we used reasonable names for the constructors of Diagram rather than just operators. Well, it's only because it's a GADT, which makes this impossible. But fear not, we can claim our operators back. Shout out to Unicode's Box-drawing characters: they provide various characters with thick and thin lines. This lets us do an, uhm, super intuitive syntax where tuples are taken apart as extra inputs/outputs, or reified back into tuples.

(━►) = Then
l ┭► r = Output l ━► r
l ┳► r = (l ━► arr (\x -> (x, x))) ┭► r
l ┶► r = Input l ━► r
l ╆► r = Output (Input l ━► arr (\x -> (x, x))) ━► r
l ┳ c = l ┳► arr (const c)
l ┓ r = Below l r
l ┧ r = Input l ┓ r
l ┃ r = Input l ━► arr snd ┓ r

infixl 5 ━►, ┳►, ┭►, ┶►, ╆►, ┳
infixr 4 ┓, ┧, ┃

Finally, while we're at it, we'll also include an operator to clearly indicate to our manager how our valuation will change if we adopt this DSL.

(📈) = Diagram

This lets us do the basics.
If we start from regular Arrow syntax:

horribleExample01 = partition isUpper >>> reverse *** sort >>> uncurry mappend

We can now turn this into:

amazingExample01 =
  (📈) (partition isUpper)┭►reverse┓
  (📈) sort               ┶►(uncurry mappend)

The trick to decrypting these diagrams is that each line in the source code consists of an arrow where values flow from the left to the right, with possible extra inputs and outputs in between. These lines are then composed using a few operators that use Below, such as ┓ and ┧.

To improve readability even further, it should also be possible to add right-to-left and top-to-bottom operators. I asked my manager if they wanted these extra operators, but they’ve been ignoring all my Slack messages since I showed them my original prototype. Probably just busy?

Anyway, there are other simple improvements we can make to the visual DSL first. Most Haskellers prefer nicely aligning things over producing working code, so it would be nice if we could draw longer lines like ━━━━┳━► rather than just ┳►. And any Haskeller worth their salt will tell you that this is where Template Haskell comes in.

Template Haskell gets a bad rep, but that’s only because it is mostly misused. Originally, it was designed to avoid copying and pasting a lot of code, which is exactly what we’ll do here. Nothing to be grossed out about.

extensions :: Maybe Char -> String -> Maybe Char -> [String]
extensions mbLeft operator mbRight =
  [operator] >>= maybe pure goR mbRight >>= maybe pure goL mbLeft
 where
  goL l op = [replicate n l ++ op | n <- [1 .. 19]]
  goR r op = [init op ++ replicate n r ++ [last op] | n <- [1 .. 19]]

industryStandardBoilerplate :: Maybe Char -> TH.Name -> Maybe Char -> TH.Q [TH.Dec]
industryStandardBoilerplate l name r = do
  sig <- TH.reify name >>= \case
    TH.VarI _ sig _ -> pure sig
    _ -> fail "no info"
  fixity <- TH.reifyFixity name >>= maybe (fail "no fixity") pure
  pure
    [ decl
    | name' <- fmap TH.mkName $ extensions l (TH.nameBase name) r
    , decl  <-
        [ TH.SigD name' sig
        , TH.FunD name' [TH.Clause [] (TH.NormalB (TH.VarE name)) []]
        , TH.InfixD fixity name'
        ]
    ]

We can then invoke this industry standard boilerplate to extend and copy/paste an operator like this:

$(industryStandardBoilerplate (Just '━') '(┭►) (Just '─'))

We’re now equipped to silence even the harshest syntax critics:

example02 =
  (📈) (partition isUpper)━┭─►(reverse)━┓
  (📈) (sort)─────────┶━►(uncurry mappend)

Beautiful! If you’ve ever wondered what people mean when they say functional programs “compose elegantly”, well, this is what they mean.

example03 =
  (📈) (+1)━┳━►(+1)━┓
  (📈) (+1)━━━━╆━►add━┓
  (📈) add────┶━►add
 where
  add = uncurry (+)

Type inference is excellent and running is easy. In GHCi:

*Main> :t example03
example03 :: Diagram '[] '[] (->) Integer Integer
*Main> run example03 1
12

Let’s look at a more complicated example.

lambda =
  (📈) (id)━┭─►(subtract 0.5)━┳━━━━━━━━►(< 0)━━━━━━━━━━┓
  (📈) (subtract 0.5)───────╆━►(add)━►(abs)━►(< 0.1)─┶━━━━━━━►(and)━━━━━━━┓
  (📈) (swap)━┭─►(* pi)━━►(sin)┳()                                        ┃
  (📈) (* 2)──────────────┶━►(sub)━►(abs)━►(< 0.2)─┧
  (📈) (or)━►(bool bg fg)
 where
  add = uncurry (+)
  sub = uncurry (-)
  and = uncurry (&&)
  or = uncurry (||)
  fg = JP.PixelRGB8 69 58 98
  bg = JP.PixelRGB8 255 255 255

This renders everyone’s favorite greek letter. Amazing! Math!

While the example diagrams in this post all use the pure function arrow ->, it is my duty as a Haskeller to note that it is really parametric in f or something. What this means is that thanks to this famous guy called Kleisli, you can immediately start using this with IO in production.

Thanks for reading!

Update: CarlHedgren pointed out to me that a similar DSL is provided by Control.Arrow.Needle. However, that package uses Template Haskell to just parse the diagram. In this blogpost, the point of the exercise is to bend Haskell’s syntax and type system to achieve the notation.
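For readers who want to play with the starting point, the plain Arrow pipeline from the post (horribleExample01) is runnable as-is with the function instance of Arrow; the example input string is mine:

```haskell
import Control.Arrow ((***), (>>>))
import Data.Char (isUpper)
import Data.List (partition, sort)

-- Split a string into (uppercase, rest), reverse the uppercase part,
-- sort the rest, then glue the two halves back together.
horribleExample01 :: String -> String
horribleExample01 = partition isUpper >>> reverse *** sort >>> uncurry mappend

main :: IO ()
main = putStrLn (horribleExample01 "HaskellRocks")  -- RHacekklloss
```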
## Appendix 1: run implementation

The implementation of run uses a helper function that lets us convert a diagram back to a normal Arrow that uses HList to pass extra inputs and outputs:

fromDiagram :: Arrow f => Diagram ins outs f a b -> f (a, HList ins) (b, HList outs)

We can then have a specialized version for when there are zero extra inputs and outputs. This greatly simplifies the type signatures and gives us a “normal” f a b:

run :: Arrow f => Diagram '[] '[] f a b -> f a b
run d = id &&& (arr (const Nil)) >>> fromDiagram d >>> arr fst

The definition for fromDiagram is as follows:

fromDiagram (Diagram f) = f *** arr (const Nil)
fromDiagram (Then l r) = fromDiagram l >>> first r
fromDiagram (Output l) =
  fromDiagram l >>> arr (\((x, y), things) -> (x, Cons y things))
fromDiagram (Input l) =
  arr (\(x, Cons a things) -> ((x, things), a)) >>>
  first (fromDiagram l) >>>
  arr (\((y, outs), a) -> ((y, a), outs))
fromDiagram (Below l r) =
  fromDiagram l >>>
  arr (\(x, outs) -> (hlast (Cons x outs), hinit (Cons x outs))) >>>
  fromDiagram r

## Appendix 2: some type signatures

We wouldn’t want these to get in our way in the middle of the prose, but GHC complains if we don’t put them somewhere.

(┳►) :: Arrow f => Diagram ins outs f a b -> f b c -> Diagram ins (b ': outs) f a c
(┭►) :: Arrow f => Diagram ins outs f a (b, o) -> f b c -> Diagram ins (o ': outs) f a c
(┶►) :: Diagram ins outs f a b -> f (b, i) c -> Diagram (i ': ins) outs f a c
(╆►) :: Arrow f => Diagram ins outs f a b -> f (b, u) c -> Diagram (u ': ins) ((b, u) ': outs) f a c
(┧) :: Diagram ins1 outs1 f a b -> Diagram (Init ((b, u) ': outs1)) outs2 f (Last ((b, u) ': outs1)) c -> Diagram (u ': ins1) outs2 f a c

## Appendix 3: image rendering boilerplate

This uses a user-supplied Diagram to render an image.

image :: Int -> Int -> Diagram '[] '[] (->) (Double, Double) JP.PixelRGB8 -> JP.Image JP.PixelRGB8
image w h diagram = runST $ do
img <- JP.newMutableImage w h
for_ [0 .. h - 1] $ \y -> for_ [0 .. w - 1] $ \x ->
let x' = fromIntegral x / fromIntegral (w - 1)
y' = fromIntegral y / fromIntegral (h - 1) in
JP.writePixel img x y $ run diagram (x', y')
JP.freezeImage img

## March 11, 2020

### Philip Wadler

# Coronavirus: Why You Must Act Now

Unclear on what is happening with Coronavirus, or what you should do about it? Tomas Pueyo presents a stunning analysis with lots of charts, a computer model you can use, and some clear and evidence-based conclusions. Please read it and do as he says!

### The haskell-lang.org team

# Get base onto stackage.org

## Preface for the unaware

When you install a particular version of GHC on your machine, it comes with a collection of "boot" libraries. What does it mean to be a "boot" library? Quite simply, it is a library that is used in the implementation of GHC and other core components. Two such notable libraries are base and ghc. All the matching package names and their versions for a particular GHC release can be found in this table.

The fact that a library comes wired-in with GHC means that there is never a need to download sources for that particular version from Hackage or elsewhere. In fact, there is really no need to upload the sources to Hackage even for the purpose of building the Haddock for each individual package, since those are conveniently hosted on haskell.org.

That being said, Hackage has always been the central place for releasing a Haskell package, and historically Hackage trustees would upload the exact version of almost every "boot" package to Hackage. That is why, for example, we have bytestring-0.10.8.2 available on Hackage, despite the fact that it comes with versions of GHC from ghc-8.2.1 to ghc-8.6.5 inclusive.

Such an upload makes total sense. Any Haskeller using a core package as a dependency for their own package in a cabal file has a central place to look for available versions and documentation for those versions.
In fact, some people have become so accustomed to this process that it has been discussed on Haskell-Cafe and a few other places when such a package was never uploaded: It's a crisis that the standard library is unavailable on Hackage...

## The problem

A bit over half a year ago ghc-8.8.1 was released, the current latest one being ghc-8.8.3. If you carefully inspect the table of core packages and try to match the available versions on Hackage for those libraries, you will quickly notice that a few of them are missing. I personally don't know the exact reasoning behind this, but from what I've heard it has something to do with the fact that ghc-8.8.1 now depends on Cabal-3.0.

The problem for us is that it also affects Stackage's web interface. Let's see how and why.

## The "how"

The "how" is very simple. Until recently, if a package was missing from Hackage, it would not have been listed on Stackage either. This means that if you tried to follow a dependency of any package on base-4.13.0.0 in nightly snapshots starting September of last year, you would not find it. As I noted before, not only was base missing, but a few others as well.

This problem also manifested itself in a funny looking bug on Stackage. For every package, the count of its dependencies was always off by at least one when compared with the actual links in the list (e.g. primitive). This had me puzzled at first. It was later that I realized that base was missing, and since almost every package depends on it, it was counted, but not listed, causing a mismatch.

## The "why"

Stackage was structured in such a way that it always used Hackage as the true source of available packages, except for the core packages, since those would always come bundled with GHC. For example, if you look at the specification of the latest LTS-15.3 snapshot, you will not find any of the core packages listed there, for they are decided by the GHC version, which in turn is specified in the snapshot.
There are a few stages, tools and actual people involved in making a Stackage snapshot happen. Here are some of the steps in the pipeline:

• a curated list of packages that involves package maintainers and sometimes Stackage curators.
• a curator tool that is used to construct the actual snapshot, build packages, run test suites and generate Haddocks.
• a stackage-server-cron tool that runs at some interval and updates the stackage.org database to reflect all of the above work in the form of package relations and their respective documentation.

The last step is of the most interest to us, because stackage.org is the place where we had stuff missing. Let's look at some pieces of information the tool needs in order for stackage-server to create a page for a package:

• Package name, its version and Pantry keys (cryptographic keys that uniquely identify the contents of the source distribution)
• Previously generated Haddocks and Hoogle files for each package
• The cabal file, so we can extract useful information about the package, such as description, license, maintainers, module names etc.
• Optionally, Readme and Changelog files from the source distribution can be served on a package page as well.

Information from the latter two bullet points is only available in the source distribution tarballs. Packages that are defined in the snapshot do not pose a problem for us, because by definition their sources are available from Hackage or any of its mirrors. Core packages on the other hand are different, in the sense that they are always available in a build environment, so information about them is present when we build a package:

$ stack --resolver lts-15.0 exec -- ghc-pkg describe base
name:                 base
version:              4.13.0.0
visibility:           public
...


The problem is that the stackage-server-cron tool is just an executable that is running somewhere in the cloud, and it doesn't have such an environment. Therefore, until recently, we had no means of getting the cabal files for core packages except by checking on Hackage. With more and more core packages missing from Hackage, especially such critical ones as base and bytestring, we had to come up with a solution.

## Solution

Solving this problem should be simple, because all we really need is the cabal files. Haddock for the missing packages had already been generated and was always available; it was just the extra little bit of metadata that was needed in order to generate the appropriate links and the package home page.

The first place to look for cabal files was the GHC git repository. The whole GHC bundle though is quite different from all other packages that we are normally used to:

• Libraries that GHC depends on do not come from Hackage, as we already know, instead they are pinned as git submodules.
• Most of the packages that are defined in the GHC repository do not have cabal files. Instead they have templates that are used for generating cabal files for a particular architecture during the build process.

This means that the repository is not a good source for grabbing cabal files. Building GHC from source is a time consuming process and we don't want to be doing that for every release, just to get the cabal files we need. A better alternative is to simply download a distribution package for a common operating system and extract the missing cabal files from there. We used the Linux x86_64 build for Debian, but the choice of OS shouldn't really matter, since we only need high level information from those cabal files.

That was it. The only thing we really needed to do in order to get the missing core packages onto Stackage was to collect all the missing cabal files and make them available to the stackage-server-cron tool.

## Conclusion

Going back to the origins of Stackage, it turns out that there were quite a few such core packages missing; the most common and most notable one was ghc itself. Only a handful of its officially released versions were ever uploaded to Hackage.

From now on we have a special repository commercialhaskell/core-cabal-files where we can place cabal files for missing core packages, which stackage-server-cron tool will pick up automatically. As it usually goes with public repositories anyone from the community is encouraged to submit pull requests, whenever they notice that a core package is not being listed on Stackage for a newly created snapshot.

For the past few weeks the very first such core package missing from Hackage, base-4.13.0.0, has been included on Stackage, with recent notable additions being bytestring-0.10.9.0, ghc-8.8.x and Cabal-3.0.1.0.

# Effectful Property Testing

You’re convinced that Property Based Testing is awesome. You’ve read about using PBT to test a screencast editor and can’t wait to do more. But it’s time to write some property tests that integrate with an external system, and suddenly, it’s not so easy.

The fantastic hedgehog library has two “modes” of operation: generating values and making assertions on those values. I wrote the compatibility library hspec-hedgehog to allow using hspec’s nice testing features with hedgehog’s excellent error messages. But then the time came to start writing property tests against a Postgresql database.

At work, we have a lot of complex SQL queries written both in esqueleto and in raw SQL. We’ve decided we want to increase our software quality by writing tests against our database code. While both Haskell and SQL are declarative and sometimes obviously correct, it’s not always the case. Writing property tests would help catch edge cases and prevent bugs from getting to our users.

# IO Tests

It’s considered good practice to model tests in three separate phases:

1. Arrange
2. Act
3. Assert

This works really well with property based testing, especially with hedgehog. We start by generating the data that we need. Then we call some function on it. Finally we assert that it should have some appropriate shape:

spec :: Spec
spec = describe "some property" $ do
  it "works" $ hedgehog $ do
    value <- someGenerator
    let result = someFunction value
    result === someExpectedValue

It’s relatively straightforward to call functions in IO. hedgehog provides a function evalIO that lets you run arbitrary IO actions and still receive good error messages.

spec :: Spec
spec = describe "some IO property" $ do
  it "works" $ hedgehog $ do
    value <- someGenerator
    result <- evalIO $ someFunction value
    result === someExpectedValue

For very simple tests like this, this is fine. However, it becomes cumbersome quite quickly when you have a lot of values you want to make assertions on.

spec :: Spec
spec = describe "some IO property" $ do
  it "works" $ hedgehog $ do
    value0 <- someGenerator0
    value1 <- someGenerator1
    value2 <- someGenerator2

    (a, b, c, d, e) <- evalIO $ do
      prepare value0
      prepare value1
      prepare value2

      a <- someFunction
      alterState
      b <- someFunction
      c <- otherFunction
      d <- anotherFunction
      e <- comeOnReally
      pure (a, b, c, d, e)

    a === expectedA
    diff a (<) b
    c === expectedC
    d /== anyNonDValue

This pattern becomes unwieldy for a few reasons:

1. It’s awkward to have to pure up a tuple of the values you want to assert against.
2. It’s repetitive to declare bindings twice for all the values you want to assert against.
3. Modifying a return means adding or removing items from the tuple, which can possibly be error-prone.

Fortunately, we can do better.

# pure on pure on pure

Instead of returning values to a different scope and then doing assertions against those values, we will return an action that does assertions, and then call it. The simple case barely changes:

spec :: Spec
spec = describe "some simple IO property" $
  it "works" $ hedgehog $ do
    value <- someGenerator
    assertions <- evalIO $ do
      result <- someFunction value
      pure $ do
        result === expectedValue
    assertions

An astute student of monadic patterns might notice that:

foo = do
  result <- thing
  result

is equivalent to:

foo = do
  join thing

and then simplify:

spec :: Spec
spec = describe "some simple IO property" $
  it "works" $ hedgehog $ do
    value <- someGenerator
    join $ evalIO $ do
      result <- someFunction value
      pure $ do
        result === expectedValue


Nice!
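To see the bind/join equivalence concretely, here is a tiny self-contained sketch (the names thing, viaBind and viaJoin are mine; Maybe stands in for the IO action that returns assertions):

```haskell
import Control.Monad (join)

-- A computation that returns another computation, like
-- 'evalIO (pure assertions)' in the post.
thing :: Maybe (Maybe String)
thing = Just (Just "assertions ran")

-- Bind the outer layer, then run what it returned...
viaBind :: Maybe String
viaBind = do
  result <- thing
  result

-- ...which is exactly what 'join' does.
viaJoin :: Maybe String
viaJoin = join thing

main :: IO ()
main = print (viaBind == viaJoin)  -- True
```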

Because we’re returning an action of assertions instead of values that will be asserted against, we don’t have to play any weird games with names or scopes. We’ve got all the values we need in scope, and we make assertions, and then we defer returning them. Let’s refactor our more complex example:

spec :: Spec
spec = describe "some IO property" $ do
  it "works" $ hedgehog $ do
    value0 <- someGenerator0
    value1 <- someGenerator1
    value2 <- someGenerator2

    join $ evalIO $ do
      prepare value0
      prepare value1
      prepare value2

      a <- someFunction
      alterState
      b <- someFunction
      c <- otherFunction
      d <- anotherFunction
      e <- comeOnReally

      pure $ do
        a === expectedA
        diff a (<) b
        c === expectedC
        d /== anyNonDValue


On top of being more convenient and easy to write, it’s more difficult to do the wrong thing here. You can’t accidentally swap two names in a tuple, because there is no tuple!

# A Nice API

We can write a helper function that does some of the boilerplate for us:

arrange :: PropertyT IO (IO (PropertyT IO a)) -> PropertyT IO a
arrange mkAction = do
  action <- mkAction
  join (evalIO action)


Since we’re feeling cute, let’s also write some helpers that’ll make this pattern more clear:

act :: IO (PropertyT IO a) -> PropertyT IO (IO (PropertyT IO a))
act = pure

assert :: PropertyT IO a -> IO (PropertyT IO a)
assert = pure


And now our code sample looks quite nice:


spec :: Spec
spec = describe "some IO property" $ do
  it "works" $
    arrange $ do
      value0 <- someGenerator0
      value1 <- someGenerator1
      value2 <- someGenerator2

      act $ do
        prepare value0
        prepare value1
        prepare value2

        a <- someFunction

        alterState

        b <- someFunction
        c <- otherFunction
        d <- anotherFunction
        e <- comeOnReally

        assert $ do
          a === expectedA
          diff a (<) b
          c === expectedC
          d /== anyNonDValue

# Beyond IO

It’s not enough to just do IO. The problem that motivated this research called for persistent and esqueleto tests against a Postgres database. These functions operate in SqlPersistT, and we use database transactions to keep tests fast, rolling back the transaction instead of committing it. Fortunately, we can achieve this by passing an “unlifter”:

arrange
  :: (forall x. m x -> IO x)
  -> PropertyT IO (m (PropertyT IO a))
  -> PropertyT IO a
arrange unlift mkAction = do
  action <- mkAction
  join (evalIO (unlift action))

act :: m (PropertyT IO a) -> PropertyT IO (m (PropertyT IO a))
act = pure

assert :: Applicative m => PropertyT IO a -> m (PropertyT IO a)
assert = pure

With these helpers, our database tests look quite neat.

spec :: SpecWith TestDb
spec = describe "db testing" $ do
  it "is neat" $ \db -> arrange (runTestDb db) $ do
    entity0 <- forAll generateEntity0
    entityList <- forAll
      $ Gen.list (Range.linear 1 100) $ generateEntityChild

    act $ do
      insert entity0
      before <- someDatabaseFunction
      insertMany entityList
      after <- someDatabaseFunction

      assert $ do
        before === 0
        diff before (<) after
        after === length entityList

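The runTestDb unlifter itself is internal to the work codebase. As a rough sketch of what it could look like with persistent (assuming TestDb simply wraps a SqlBackend, and using transactionUndo for the rollback trick the post describes), one might write:

```haskell
import Control.Monad.Trans.Reader (ReaderT)
import Database.Persist.Sql (SqlBackend, runSqlConn, transactionUndo)

-- Hypothetical wrapper; the real TestDb type is internal to the post's codebase.
newtype TestDb = TestDb SqlBackend

-- Run the action inside a transaction, then roll it back, so every
-- test starts from a clean database.
runTestDb :: TestDb -> ReaderT SqlBackend IO a -> IO a
runTestDb (TestDb conn) action = runSqlConn (action <* transactionUndo) conn
```

This is only a sketch: running it for real requires a live Postgres connection.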

# A Real Example

OK, OK, so that last one was too abstract. Let’s say we’re writing a billing system (jeez, I sure do use that example a lot). We keep track of Invoices, which group InvoiceLineItems that contain actual amounts. We have a model for Payments, which record details on a Payment like how it was made, who made it, whether it was successful, etc. A Payment can be applied to many Invoices, so we have a join table InvoicePayment that records the amount of each payment allocated toward an Invoice.

Neat.

Because we love Postgresql, quite a bit of our business logic is performed database-side, either via custom SQL functions or esqueleto expressions. One of these functions is invoicePaidTotal, which tells us the total amount paid towards an Invoice.

Here’s the esqueleto code:

invoicePaidTotal
:: SqlExpr (Entity DB.Invoice)
-> SqlExpr (Value (Dollar E2))
invoicePaidTotal i =
  fromMaybe_ zeroDollars $ subSelect . from $ \(ip `InnerJoin` p) -> do
    on (p ^. DB.PaymentId ==. ip ^. DB.InvoicePaymentPaymentId)
    where_ (ip ^. DB.InvoicePaymentInvoiceId ==. i ^. DB.InvoiceId)
    where_ (Payment.isSucceeded p)
    pure (sumDollarDefaultZero (ip ^. DB.InvoicePaymentTotal))


(Some of these functions are internal to the work codebase, but they do the obvious thing)

This is equivalent to the following SQL:

SELECT
COALESCE(SUM(ip.total), 0)
FROM invoice_payment AS ip
INNER JOIN payment AS p
ON p.id = ip.payment_id
WHERE ip.invoice_id = :invoice_id
AND payment_succeeded(p)


Now, we want to write a test for it. Upon inspection, we’re testing two things: SQL’s SUM function and the payment_succeeded function (which itself is actually another esqueleto expression that would unfold).

So, we can write a property:

• If there are no Payments or InvoicePayments in the database, then this function should return $0.00.
• If there are some InvoicePayments in the database, then this function should return the sum of their total fields, provided that the associated Payment is successful.

Here’s the test code. We’ll start by looking at the arrange bit, which creates the database models.

arrange (runTestDb db) "invoicePaidTotal" $ do
  let invoiceId = InvoiceKey "invoice"
      invoice = baseInvoice

  payments <-
    forAll $ Gen.list (Range.linear 1 50) $ do
      id <- Gen.id
      name <- Gen.faker Faker.name
      pure $ Entity id basePayment { paymentName = name }

  invoicePayments <- forAll $
    for payments $ \(Entity paymentId _) -> do
      amount <- Gen.integral (Range.linear 1 1000)
      pure $ InvoicePayment invoiceId paymentId amount

  act $ do
    ... snip ...

Values like baseInvoice and basePayment are useful as test fixtures. I’ve generally found that writing generators for models isn’t nearly as useful as generating modifications to models that alter what you care about. This doesn’t catch as many potential edge case bugs, so it has its downsides, but if the client name being “Foobar” instead of “AsdfQuux” affects payment totals, then something is deeply weird.

Alright, let’s act! I usually like to define the function under test as subject, along with whatever scaffolding needs to happen to make it easy to call. In this case, I want to test an esqueleto SqlExpr, which means I need to convert it into a query and run it. Calling it subject is just an aesthetic thing.

    act $ do
      let subject =
            select $ from $ \i -> do
              where_ $ i ^. InvoiceId ==. val invoiceId
              pure $ invoicePaidTotal i


I call head fearlessly here because I don’t care about runtime errors in test suites. YOLO.

Next, we’re going to insert our invoice, and call subject to get the paid total.

        insertKey invoiceId invoice

beforePayments <- subject


Then we’ll mutate the state of the database by inserting all the payments and invoice payments.

        insertEntityMany payments
insertMany invoicePayments

afterPayments <- subject


And that’s all we need to start writing some assertions.

        assert $ do
          beforePayments === 0
          afterPayments === do
            sum $ map invoicePaymentTotal $ filter isSuccessfulPayment invoicePayments

isSuccessfulPayment is a Haskell function that mirrors the logic in the SQL. If this test passes, then we know that the logic is all set.

Next up, we might want to write an equivalence test for the Haskell isSuccessfulPayment and the esqueleto/SQL Payment.isSuccessful. This would look something like:

arrange (runTestDb db) $ do
  Entity paymentId payment <- forAll Payment.gen

  act $ do
    insertKey paymentId payment

    dbPaymentSuccessful <- fmap (unValue . head) $
      select $ from $ \p -> do
        where_ $ p ^. PaymentId ==. val paymentId
        pure (Payment.isSuccessful p)

    assert $ do
      dbPaymentSuccessful === paymentIsSuccessful payment



# On Naming Things

No, I’m not going to talk about that kind of naming things. This is about actually giving names to things!

The most general types for arrange, act, and assert are:

act, assert :: Applicative f => a -> f a
act = pure
assert = pure

arrange
  :: Monad m
  => (forall x. n x -> m x)
  -> m (n (m a))
  -> m a
arrange transform mkAction = do
action <- mkAction
join (transform action)


These are pretty ordinary and unassuming functions. They’re so general. It can be hard to see all the ways they can be useful.

Likewise, if we only ever write the direct functions, then it can be difficult to capture the pattern and make it obvious in our code.

Giving a thing a name makes it real in some sense. In the Haskell sense, it becomes a value you can link to, provide Haddocks for, and show examples on. In our work codebase, the equivalent functions to the arrange, act, and assert defined here have nearly 100 lines of documentation and examples, as well as more specified types that can help guide you to the correct implementation.

Sometimes designing a library is all about narrowing the potential space of things that a user can do with your code.

## March 09, 2020

Last week, we announced our Practical Haskell course. Enrollments are still open, but not for much longer! They will close at midnight Pacific time on Wednesday, March 11th, only a couple days from now!

I've always hoped to provide content that would help people make the jump from beginners to seasoned Haskell developers. I want to show that Haskell can be useful for "Real World" applications. Those are the main goals of this course. So in this article, I wanted to share some of the mistakes I made when I was trying to make that jump. These are what motivated me to make this course, so I hope you can learn from them.

## Package Management is Key

My Haskell career started with a side project, one you can still see on Github. There were some cool things about the project, but my process had several flaws. The first one was that I had no idea how to organize a Haskell project.

My early work involved writing all my code in .hs source files and running manual tests with runghc. Installing dependencies was a mess (I put everything in the global package database). I eventually learned to use Cabal, but without sandboxing. Dependency hell ensued. It was only after months of working through that process that I learned about Stack. Stack made everything easier, but I could have used it from the start!

Don't repeat my mistake! Learn how to use Stack, or just Cabal, or even Nix! This will solve so many of your early problems. It will also streamline the rest of your development process. Speaking of...
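For what it’s worth, getting started with Stack is mostly a matter of pinning a resolver; a minimal stack.yaml sketch looks like this (the snapshot name is illustrative):

```yaml
# Pin a Stackage snapshot so dependency versions are reproducible.
resolver: lts-15.3

# Local packages in this project (here, just the current directory).
packages:
- .
```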

## Test First, Integrate Completely

When it comes to making a project, the first question you should ask is, "How will my customer use this?" When it comes to writing code within that project, you should always then ask, "How will I know this code works?"

These two questions will guide your development and help avoid unnecessary rework. It's a natural tendency of developers that we want to jump in on the "meat" of the problem. It's exactly the mistake I made on that first project. I just wanted to write Haskell! I didn't want to worry about scripting or package nonsense. But these issues will ultimately get in the way of what you really want to do. So it's worth putting in the effort to overcome them.

The first step of the project as a whole should be to build out your end-to-end pipeline. That is, how will you put this code out there on the web? How will someone end up using your code? There will often be tedious scripting involved, and dealing with services (CI, AWS, etc.). But once that work is out of the way, you can make real progress.

Then when developing a particular component, always know how you'll test it. Most often, this will be through unit testing. But sometimes you'll find it's more complicated than that. Nothing's more frustrating than thinking you're done coding and finding problems later. So it's important to take the time to learn about the frameworks that let you test things with ease. Keep practicing over and over again until testing is second nature.
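As a taste of how little ceremony a unit test needs, here is a toy suite using hspec, one of the common Haskell testing frameworks (the properties tested are my own trivial examples):

```haskell
import Test.Hspec

main :: IO ()
main = hspec $
  describe "reverse" $ do
    -- Two small, deterministic unit tests.
    it "is its own inverse" $
      reverse (reverse [1, 2, 3 :: Int]) `shouldBe` [1, 2, 3]
    it "preserves length" $
      length (reverse "haskell") `shouldBe` 7
```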

## Start Simple

Another important thing when it comes to the learning process is knowing how to start small. I learned this over the course of my machine learning series last fall. My methods were often so ineffective that I didn't know if the algorithm I was trying to implement worked at all. But the problem I was trying to solve was too difficult! I've found more success in machine learning by starting with simpler problems. This way, you'll know the general approach works, and you can scale up accordingly.

This also makes it much easier to follow the advice above! If your system is large and complicated, the scripting and running process will be harder. You'll have to spend more time getting everything up and running. For a smaller project, this is not so difficult. So you'll get valuable practice at a smaller scale. This will make bigger projects smoother once you get there.

## Use Both Documentation and Examples

None of us were born knowing how to write Haskell. The first time you use a library, you won't know the best practices. The documentation can help you. It'll list everything you need, but often a lot more. It can be hard to know what's necessary and what's not.

So another great thing to do when starting out is to find a project that has used the library before. You need to establish some baseline of "something that works". This way, you'll have a more solid foundation to build on. You'll have specific examples to work from, which will help build your "end-to-end experience".

In my first project, I used the Parsec library without using any examples! My code was sloppy and repetitive. There were many shortcuts I didn't know about hiding in the docs. And I could have avoided that if I had first looked for a project that also used the library. Then I could have started from there and built my knowledge.

Documentation and examples work in tandem with each other. If you use the docs without examples, you'll miss a lot of shortcuts and practical uses. If you use examples without the docs, you'll miss the broader picture of what else you can do! So both are necessary to your development as a programmer.

## Conclusion

If you're not as confident in your skills yet, you can also check out our Beginners course! It requires no experience and will walk you through the basics!

# How to get a Haskell job

Summary: There are four things I recommend to get a Haskell job. Applies to most technologies.

I was recently emailed by someone who asked for advice on what they could do to get a Haskell job in the future. Rather than share my reply only with them, I thought I'd cc the world via my blog. I'd give the same advice if asked about how to get a job focusing on any technology, just changing the examples. While the pieces of advice explain how they can be used to get a job, I believe they are all useful in their own right too!

The most important thing to get a job in Haskell, is being fluent in Haskell, which can only be done by writing real Haskell programs/libraries. Solving small exercises or challenges will help a bit, but there are some limitations/solutions/approaches that you only learn when trying to do something for real. For beginners at Haskell, I recommend taking whatever you are interested in outside of Haskell, and writing a library about that. It can be image codecs, statistics, lasers, poker - whatever. There's probably something you know a lot about, which most people don't, which gives you a good starting point. That library will convince future employers you know how to write good code, and in the best case, you'll find an employer ends up using your library. Hiring people whose work you already use is an easy decision. When I've hired programmers in the past, I treat their CV as a pointer to their GitHub account.

In most cities there are a bunch of either Haskell or functional programming meetups. Usually Meetup will have them, but a Google search can find them too. If there's nothing near you, try a more global event like ZuriHac or the Haskell Implementors' Workshop. These events give you an idea of how other Haskell programmers think, and there are always people who are currently employed to write Haskell, who might offer you a job. Some of these Haskellers will even become friends who you collaborate with over decades.

## Write words

As you are learning, write down what you are learning, what you are thinking, the hurdles you overcome and the thoughts you have. In many cases, no one will listen, but the mere act of writing down the words serves as a record of what you are learning. In some cases, you'll find an audience, and that audience will give you credibility (which isn't real credibility, but the world is a funny place) and contacts which can be useful in getting you a job. When I started, I wrote on my blog, but now Twitter or Medium might be better. Maybe it should be Twitch streams or SnapChat messages - I've no idea. Do whatever works for you. When I got my first Haskell job, I had colleagues who didn't know who I was, but had already been reading my blog.

# Optimizing a maze with graph theory, genetic algorithms, and Haskell

Lately, I’ve been working on a side project that became a fun exercise in both graph theory and genetic algorithms. This is the story of that experience.

### Beginnings

I recently took a break from my job to recover my excitement and sense of wonder about the world. During this break, I ended up building a board game. In this game, players navigate a maze built out of maze pieces on hexagon tiles, so that when these tiles are shuffled and connected at random, they build a complete maze, on a grid like this:

I wasn’t at all sure that I could build a very good maze by the random placement of tiles. Still, I figured that I could wing it and see what happened, so I did.

### Attempt #1: Doing the Simplest Thing

Mazes are almost the ideal application of graph theory. A graph (and here I always mean an undirected graph) is a bunch of vertices connected by edges. A maze, on the other hand, is a bunch of locations connected by paths. Same thing, different words!

In this case, my original idea was that the hexagon tiles were the locations (i.e., vertices), and each of the six sides of the tile would either be open or blocked, potentially making a path to the adjacent tile. The question, then, was which of these edges should be blocked. I was guided by two concerns:

1. It should be possible to solve the maze. That is, there should be (at least with high probability) an open path from the start to the finish.
2. It should not be trivial to solve the maze. That is, it should not be the case that moving in any direction at all gets you from the start to the finish.

A key insight here is that adding an edge in a graph always does one of two things: it either connects two previously unconnected components of the graph, or it creates a cycle (a path from a vertex back to itself without repeating an edge) by connecting two previously connected vertices in a second way. The two concerns, then, say that we want to do the first if at all possible, but avoid the second. Unfortunately, since our edges are pointing in random directions, it’s not clear how we can distinguish between the two!

But, at the very least, we can try to get about the right number of edges in our graph. We want the graph to be acyclic (i.e., not contain any cycles), but connected. Such a graph is called a tree. We’ll use the following easy fact several times:

Proposition: In any acyclic graph, the number of edges is always equal to the number of vertices minus the number of connected components. (Proof: By induction on the number of edges: if there are no edges, then each vertex is its own connected component. If there are edges, removing any edge keeps the graph acyclic, so the equality holds, but re-adding that edge adds one edge and removes one connected component, preserving the equality.)

Corollary: A tree has one fewer edge than it has vertices. (Proof: the entire tree is one connected component. Apply the proposition above.)
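The proposition is easy to check mechanically. Here is a small self-contained sketch (my own toy graph, not from the post) that counts the connected components of an acyclic graph by naive frontier expansion and confirms that edges = vertices - components:

```haskell
import Data.List (nub)

-- A small acyclic graph: components {1,2,3,4}, {5,6}, and the isolated {7}.
vertices :: [Int]
vertices = [1 .. 7]

edges :: [(Int, Int)]
edges = [(1, 2), (1, 3), (2, 4), (5, 6)]

-- Connected component of a vertex, by repeated frontier expansion.
component :: Int -> [Int]
component v = go [v]
  where
    go seen
      | null next = seen
      | otherwise = go (seen ++ next)
      where
        next =
          nub
            [ y
              | (a, b) <- edges,
                (x, y) <- [(a, b), (b, a)],
                x `elem` seen,
                y `notElem` seen
            ]

-- Count components by their smallest member.
numComponents :: Int
numComponents = length (nub (map (minimum . component) vertices))

main :: IO ()
main =
  -- edges = vertices - components: 4 = 7 - 3
  print (length edges == length vertices - numComponents)
```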

So, I thought to myself, I have twenty hex tiles, because that’s how many blank tiles came in the box. If I treat each of them as a vertex, then I need 19 connections between them.

There’s one further complication. In order for a path to be open, it must not be blocked on either side, and the outside boundary of the maze is always blocked. So not every open edge of a tile adds an edge to the graph. This is easily solved with a slight change of perspective: instead of counting the number of edges, one can count the desired probability of an edge being open.

In my chosen configuration of tiles, there were 43 internal borders between tiles, not counting the outer boundary of the whole maze. So I wanted about a 19/43 (or about 44%) probability of an edge on each of these boundaries. Because both tiles need an open side to make that edge, if each edge is chosen independently, the probability of a tile having an open edge should be the square root of that, or about 66%. Since each of the 20 tiles has 6 edges on each of 2 sides, there are 240 edges, and a random 160 of them (that’s 240 * sqrt(19/43)) should be open, and the remaining 80 should be blocked.
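The arithmetic in this paragraph is quick to reproduce (this is just a check of the numbers above, not code from the post):

```haskell
-- Reproducing the open-edge probability computation.
main :: IO ()
main = do
  let edgeProb = 19 / 43 :: Double  -- desired chance an internal border is open
      sideProb = sqrt edgeProb      -- each side is chosen independently, so take the square root
      totalEdges = 20 * 6 * 2       -- 20 tiles, 6 edges per side, 2 sides per tile
      openEdges = fromIntegral totalEdges * sideProb
  print (round openEdges :: Int)    -- about 160 open, leaving about 80 blocked
```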

You can see a simulation of this here. This simple simulation doesn’t choose tiles. Unfortunately, this experiment failed. If you try this several times, you’ll notice:

1. The mazes are just too simple. Twenty spaces in the maze are just not enough.
2. It’s far too frequent that the open edges are in the wrong places, and you cannot get from start to finish.

Now, it’s not imperative that a maze be acyclic, so I could (and, in practice, would) increase the probability of an open edge to solve the second problem… but that would just make the first problem even worse. There is no choice of probability that fixes both problems at once.

### Attempt #2: More Interesting Tiles

To be honest, I expected this first attempt to fail. I never even drew the tiles, but just simulated enough to be confident that it would not work. Clearly, I needed more structure within the map tiles.

One thing that would have made the result look more compelling would be to draw interesting (but static) mazes onto each tile, which are all connected themselves, and have exits in all directions with the appropriate probabilities. This is all smoke and mirrors, though. Ultimately, it reduces to the same system as above. With a sufficiently increased probability of an open edge, this would satisfy my requirements.

But it feels like cheating. I want the high-level structure of the maze to be interesting in its own right. And no matter how clever I get with the tiles, the previous section showed that’s not possible in this model.

The remaining option, then, is to consider that each tile, instead of being one location with a number of exits, actually contains multiple locations, with varied connectivity. Since I don’t want to add artificial complexity inside the tiles, they will simply connect the edges as desired. Some of the tiles now look like this:

At first glance, the analysis for connecting the maze becomes trickier, because the number of vertices in the graph changes! Three of the tiles above have two locations, but the last one has only one. There are even tiles with three different paths.

But never fear: another change of perspective simplifies the matter. Instead of considering the graph where each tile or path is a vertex, we will consider the dual graph, where each border is a vertex, like this:

Now we once again have a fixed number of vertices (77 of them), and tiles that determine the edges between those vertices. To connect this graph, then, requires at least 76 edges spread across 20 tiles, or about 3.8 edges per tile. Maybe even a few more, since the resulting graph is sure to contain some cycles, after all, so let’s call it 4 per tile.

Here’s the mistake I made when I first considered this graph. The last tile in the set of four examples above connects five different vertices together, so that it’s possible to pass directly between any pair of them. Naively, you might think that corresponds to 10 edges: one for each pair of vertices that are connected. That fit my intuition that connecting five out of the six sides should be way above average, as far as connections on a tile. I thought this, and I worked out a full set of tiles on this assumption before realizing that the resulting mazes were not nearly connected enough.

Here’s the problem: those 10 edges will contain a rather large number of cycles. The requirement for at least 76 edges was intended for an acyclic graph. We will inevitably end up with some cycles between tiles, but we should at least avoid counting cycles within a tile. To connect those five vertices with an acyclic graph requires four edges rather than ten. It doesn’t matter to us which four edges those are; they only exist in theory, and don’t change how the tile is drawn. But it’s important to only count them as four.
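The counting mistake is easy to state numerically (a check of the numbers above, not code from the post): all pairs among n vertices give n(n-1)/2 edges, while an acyclic connection needs only n-1.

```haskell
-- Connecting n vertices: all pairs versus a spanning tree.
pairEdges :: Int -> Int
pairEdges n = n * (n - 1) `div` 2

treeEdges :: Int -> Int
treeEdges n = n - 1

main :: IO ()
main = do
  print (pairEdges 5)  -- 10 edges if every connected pair is counted
  print (treeEdges 5)  -- 4 edges suffice to connect five vertices acyclically
```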

Now, 76 edges feels like an exorbitant number of edges! If we must average connecting nearly five out of the six sides of each tile, it seems it will be possible to move almost anywhere. But on further reflection, this isn’t unreasonable. Nearly half of those vertices are on the outside boundary of the maze! All it takes is one blocked path in an unfortunate place to isolate one of those vertices. Even a one-in-six chance will isolate about six of them. Those six edges that are not being used to connect the graph will then go to adding excess paths in the remaining pieces!

The fact that vertices on the edges of the graph are so likely to be isolated is a problem. I decided to mitigate that problem creatively, by adding some of those 76 edges not on the tiles, but on the board. Something like this:

With these 14 edges on the exterior of the maze (remember that connecting three vertices only counts as two edges), we need only 62 on the interior, which is barely more than 3 per tile. Not only that, but the tiles on the edge are considerably less likely to be isolated in the resulting maze.

At this point, I drew a set of tiles, iterated on them for a bit by laying out mazes and getting a sense for when they were too easy, too hard, etc. I ended up with a mechanic that I was mostly happy with. Here is the actual game board with a random maze laid out. The layout is a little different from the examples above because of the need to work in other game play elements, but the idea is essentially the same.

If you look closely, you’ll see that the resulting maze isn’t entirely connected: there are two short disconnected sections on the bottom-left and bottom-right tiles. It definitely contains cycles, both because the disconnected components free up edges that cause cycles, and because I’ve included more than the minimum number of edges needed to connect the maze anyway.

But it is non-trivial, visually interesting, and generally in pretty good shape. It appears fairly easy when all laid out like this, but in the actual game, players only uncover tiles when they reach them, and the experience is definitely much like navigating a maze, where you’re not sure where a trail will lead until you follow it. All in all, it’s a success.

### Attempt #3: Using Genetic Algorithms

I wasn’t unhappy with this attempt, but I suspected that I could do better with some computer simulation. A little bit of poking around turned up a Haskell library called moo that provides the building blocks for genetic algorithms.

The idea behind genetic algorithms is simple: you begin with an initial population of answers, which are sometimes randomly generated, and evaluate them according to a fitness function. You take the best answers in that set, and then expand it by generating random mutations and crosses of that population. This provides a new population, which is evaluated, mutated, cross-bred, etc.
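To make that loop concrete, here is a minimal, self-contained sketch of the idea (a toy objective and helper names of my own, not the moo API): elitism plus random mutation, maximizing the number of set bits in a bit-string genome.

```haskell
import Data.List (mapAccumL, sortOn)
import Data.Ord (Down (..))
import System.Random

-- A toy genome: maximize the number of True bits.
type Genome = [Bool]

fitness :: Genome -> Int
fitness = length . filter id

-- Flip each bit with probability 1/8.
mutate :: StdGen -> Genome -> (StdGen, Genome)
mutate = mapAccumL step
  where
    step g b =
      let (r, g') = randomR (0 :: Int, 7) g
       in (g', if r == 0 then not b else b)

-- One generation: keep the best half (elitism), refill with mutants of it.
generation :: (StdGen, [Genome]) -> (StdGen, [Genome])
generation (g, pop) =
  let elite = take (length pop `div` 2) (sortOn (Down . fitness) pop)
      (g', mutants) = mapAccumL mutate g elite
   in (g', elite ++ mutants)

main :: IO ()
main = do
  let pop0 = replicate 8 (replicate 16 False)
      (_, popN) = iterate generation (mkStdGen 42, pop0) !! 100
  -- Elitism guarantees the best fitness never decreases.
  print (maximum (map fitness popN) >= maximum (map fitness pop0))
```

Real libraries like moo add crossover, selection strategies, and stopping conditions on top of this same skeleton.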

import Algebra.Graph.AdjacencyMap.Algorithm (scc)
import Algebra.Graph.ToGraph
import qualified Algebra.Graph.Undirected as U
import Control.Monad
import Data.Containers.ListUtils
import Data.Function (on)
import Data.List
import Moo.GeneticAlgorithm.Binary
import System.IO.Unsafe
import System.Random
import System.Random.Shuffle

And some types:

data Direction = N | NE | SE | S | SW | NW  deriving (Show, Eq, Ord, Enum)
type Trail = [Direction]
data Tile = Tile [Trail] [Trail]  deriving (Show, Eq)

A Direction is one of the six cardinal directions in a hex world, arranged in clockwise order. A Trail is a single trail from one of the tile designs, represented as a list of the directions from which one can exit the tile by that trail. Everything else about a trail is just visual design. And finally, a Tile has a top and a bottom side, and each of those has some list of Trails.

The moo library requires some way to encode or decode values into bit strings, which play the role of DNA in the genetic algorithm. The framework will make somewhat arbitrary modifications to these bit strings, and you’re expected to be able to read these modified strings and get reasonable values. In fact, it’s expected that small modifications to a bit string will yield similarly small changes to the encoded value.

Here’s what I did. Each side of a tile is encoded into six numbers, each of which identifies the trail connected to a direction of the tile. If a tile does not have a trail leaving in some direction, then that direction will have a unique trail number, so essentially it becomes a dead end. Assuming we don’t want a trail with only a single non-branching path (and we don’t!), it suffices to leave room for four trail numbers, which requires only two bits in the string. Then a side needs 12 bits, a two-sided tile needs 24 bits, and a full set of 20 tiles needs 480 bits.
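The bit budget from this paragraph adds up as follows (a check of the arithmetic, not code from the post):

```haskell
-- Bits needed to encode a full set of tiles.
main :: IO ()
main = do
  let bitsPerTrail = 2                -- four possible trail numbers fit in two bits
      bitsPerSide = 6 * bitsPerTrail  -- one trail number per direction
      bitsPerTile = 2 * bitsPerSide   -- two sides per tile
  print (20 * bitsPerTile)            -- 480 bits for the full set of 20 tiles
```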

Here’s the code to encode a stack of tiles this way starting from the earlier representation:

encodeTile :: Tile -> [Bool]
encodeTile (Tile a b) =
  concat
    [ encodeBinary (0 :: Int, 3) (trailNum side dir)
      | side <- [a, b],
        dir <- [N .. NW]
    ]
  where
    trailNum trails d =
      case [i | (t, i) <- zip trails [0 .. 3], d `elem` t] of
        [i] -> i
        _ -> error "Duplicate or missing dir on tile"

encodeTiles :: [Tile] -> [Bool]
encodeTiles = concat . map encodeTile

The encodeBinary function is part of moo, and just encodes a binary number into a list of bools, given a range. The rest of this just matches trails with trail numbers and concatenates their encodings.

Decoding is a little more complex, mainly because we want to do some normalization that will come in handy down the road. A helper function first:

clockwise :: Direction -> Direction
clockwise N = NE
clockwise NE = SE
clockwise SE = S
clockwise S = SW
clockwise SW = NW
clockwise NW = N

Now the decoding:

decodeTile :: [Bool] -> Tile
decodeTile bs = Tile (postproc a) (postproc b)
  where
    trailNums =
      map
        (decodeBinary (0 :: Int, 3))
        (splitEvery trailSize bs)
    trail nums n = [d | (d, i) <- zip [N .. NW] nums, i == n]
    a = map (trail (take 6 trailNums)) [0 .. 3]
    b = map (trail (drop 6 trailNums)) [0 .. 3]

postproc :: [Trail] -> [Trail]
postproc = normalize . filter (not . null)
  where
    normalize trails =
      minimum
        [ simplify ts
          | ts <- take 6 (iterate (map (map clockwise)) trails)
        ]
    simplify = sort . map sort

decodeTiles :: [Bool] -> [Tile]
decodeTiles = map decodeTile . splitEvery tileSize

trailSize :: Int
trailSize = bitsNeeded (0 :: Int, 3)

tileSize :: Int
tileSize = 12 * trailSize

Again, some of the functions (such as decodeBinary, bitsNeeded, and splitEvery) are utility functions provided by moo. The postproc function applies a bit of normalization to the tile design. Since reordering the exits of a trail, reordering the trails themselves, and rotating the tile do not change its meaning, we simply make a consistent choice here, so that equal tiles will compare equal.
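To see the normalization at work, here is a self-contained restatement of the idea (repeating the Direction type and clockwise from above, with my own example designs): two rotations of the same single-trail design normalize to the same value.

```haskell
import Data.List (sort)

data Direction = N | NE | SE | S | SW | NW
  deriving (Show, Eq, Ord, Enum)

clockwise :: Direction -> Direction
clockwise NW = N
clockwise d = succ d

-- Least representative over all six rotations, with sorted trails and exits.
normalize :: [[Direction]] -> [[Direction]]
normalize trails =
  minimum
    [ simplify ts
      | ts <- take 6 (iterate (map (map clockwise)) trails)
    ]
  where
    simplify = sort . map sort

main :: IO ()
main =
  -- [[N, SE]] is [[NE, S]] rotated one step, so both normalize identically.
  print (normalize [[N, SE]] == normalize [[NE, S]])
```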

The next thing we need is a way to score a set of tiles, so the best can be chosen. From the beginning, we know that the scoring will involve shuffling the tiles, which actually includes three kinds of randomization:

• Moving the tiles to different places on the board.
• Rotating the tiles to a random orientation.
• Flipping the tiles so either side is equally likely to be visible.

Here’s that code:

rotateTile :: Int -> Tile -> Tile
rotateTile n
  | n > 0 = rotateTile (n - 1) . rot
  | n < 0 = rotateTile (n + 6)
  | otherwise = id
  where
    rot (Tile top bottom) =
      Tile (map (map clockwise) top) bottom

flipTile :: Tile -> Tile
flipTile (Tile top bottom) = Tile bottom top

randomizeTile :: RandomGen g => g -> Tile -> (g, Tile)
randomizeTile g t
  | flipped = (g3, rotateTile rots (flipTile t))
  | otherwise = (g3, rotateTile rots t)
  where
    (flipped, g2) = random g
    (rots, g3) = randomR (0, 5) g2

shuffleTiles :: RandomGen g => g -> [Tile] -> (g, [Tile])
shuffleTiles g tiles = (g3, result)
  where
    (g1, g2) = split g
    shuffledTiles = shuffle' tiles (length tiles) g1
    (g3, result) = mapAccumL randomizeTile g2 shuffledTiles

In order to score the random arrangements of tiles, it’s useful to have a graph library. In this case, I chose Alga for the task, mainly because it allows for graphs with an arbitrary node type (which I want here), it seems to be garnering some excitement, and I haven’t had a chance to play with it yet. Alga represents undirected graphs with a different Graph type (annoyingly called the same thing) in a subpackage, hence the qualified import.

In order to share actual code, I’m now switching to work with the real game board, from the photo above. There are certain fixed locations that I care about because players will need to reach them in the game: the witch’s hut, the wishing well, the monster’s lair, the orchard, the spring, and the exit. These get their own named nodes. The tiles are named “1” through “20”. And finally, because each tile can have multiple trails, a node consists of a name and a trail number (which is always 0 for the built-in locations). Here’s the code to build a graph from a list of tiles:

topSide :: Tile -> [[Direction]]
topSide (Tile top _) = top
tileGraph :: [Tile] -> U.Graph (String, Int)
tileGraph tiles =
  U.edges $
    [ ((show a, trailNum a dira), (show b, trailNum b dirb))
      | (a, dira, b, dirb) <- connections
    ]
      ++ [ (("Well", 0), (show c, trailNum c dir))
           | (c, dir) <- [(6, SE), (7, NE), (10, S), (11, N), (14, SW), (15, NW)]
         ]
      ++ [ (("Hut", 0), (show c, trailNum c dir))
           | (c, dir) <- [(1, NE), (5, N), (9, NW)]
         ]
      ++ [ (("Spring", 0), (show c, trailNum c dir))
           | (c, dir) <- [(4, S), (8, SW)]
         ]
      ++ [ (("Orchard", 0), (show c, trailNum c dir))
           | (c, dir) <- [(13, NE), (17, N)]
         ]
      ++ [ (("Lair", 0), (show c, trailNum c dir))
           | (c, dir) <- [(12, SE), (16, S), (20, SW)]
         ]
      ++ [ (("Exit", 0), (show c, trailNum c dir))
           | (c, dir) <- [(19, SE), (20, NE), (20, SE)]
         ]
  where
    trailNum n dir =
      head
        [ i
          | (exits, i) <- zip (topSide (tiles !! (n - 1))) [0 ..],
            dir `elem` exits
        ]
    connections =
      [ (1, S, 2, N), (1, SW, 2, NW), (1, SE, 5, NW),
        (2, NE, 5, SW), (2, SE, 6, NW), (2, S, 3, N), (2, SW, 3, NW),
        (3, NE, 6, SW), (3, SE, 7, NW), (3, S, 4, N), (3, SW, 4, NW),
        (4, NE, 7, SW), (4, SE, 8, NW),
        (5, NE, 9, SW), (5, SE, 10, NW), (5, S, 6, N),
        (6, NE, 10, SW), (6, S, 7, N),
        (7, SE, 11, NW), (7, S, 8, N),
        (8, NE, 11, SW), (8, SE, 12, NW), (8, S, 12, SW),
        (9, NE, 13, N), (9, SE, 13, NW), (9, S, 10, N),
        (10, NE, 13, SW), (10, SE, 14, NW),
        (11, NE, 15, SW), (11, SE, 16, NW), (11, S, 12, N),
        (12, NE, 16, SW),
        (13, SE, 17, NW), (13, S, 14, N),
        (14, NE, 17, SW), (14, SE, 18, NW), (14, S, 15, N),
        (15, NE, 18, SW), (15, SE, 19, NW), (15, S, 16, N),
        (16, NE, 19, SW), (16, SE, 20, NW),
        (17, SE, 18, NE), (17, S, 18, N),
        (18, SE, 19, NE), (18, S, 19, N),
        (19, S, 20, N)
      ]

Tedious, but it works! I can now score one of these graphs. There are two kinds of things I care about: the probability of being able to get between any two of the built-in locations, and the number of “excess” edges that create multiple paths.
hasPath :: String -> String -> U.Graph (String, Int) -> Bool
hasPath a b g = (b, 0) `elem` reachable (a, 0 :: Int) (U.fromUndirected g)

extraEdges :: Ord a => U.Graph a -> Int
extraEdges g = edges - (vertices - components)
  where
    vertices = U.vertexCount g
    edges = U.edgeCount g
    components = vertexCount (scc (toAdjacencyMap (U.fromUndirected g)))

scoreGraph :: U.Graph (String, Int) -> [Double]
scoreGraph g =
  [ -0.1 * fromIntegral (extraEdges g),
    if hasPath "Hut" "Well" g then 1.0 else 0.0,
    if hasPath "Hut" "Spring" g then 1.0 else 0.0,
    if hasPath "Hut" "Lair" g then 1.0 else 0.0,
    if hasPath "Hut" "Orchard" g then 1.0 else 0.0,
    if hasPath "Hut" "Exit" g then 1.0 else 0.0,
    if hasPath "Well" "Spring" g then 1.0 else 0.0,
    if hasPath "Well" "Lair" g then 1.0 else 0.0,
    if hasPath "Well" "Orchard" g then 1.0 else 0.0,
    if hasPath "Well" "Exit" g then 1.0 else 0.0,
    if hasPath "Spring" "Lair" g then 1.0 else 0.0,
    if hasPath "Spring" "Orchard" g then 1.0 else 0.0,
    if hasPath "Spring" "Exit" g then 1.0 else 0.0,
    if hasPath "Lair" "Orchard" g then 1.0 else 0.0,
    if hasPath "Lair" "Exit" g then 1.0 else 0.0,
    if hasPath "Orchard" "Exit" g then 1.0 else 0.0
  ]

The result of scoring is a list of individual scores, which will be added together to determine the overall fitness. I’ve added a low negative weight to the extra edges, in order to express the fact that they are bad, but far less so than not being able to reach an important game location. (Unreachability isn’t fatal, though, since the game also has mechanisms by which the maze changes over time… that’s my backup plan.)

Now, I just need to score the tiles themselves by generating a bunch of random game board graphs, and averaging the scores for those graphs. When I did so, though, I found that there were other things that went wrong with the optimized tiles:

• The algorithm reused the same designs over and over again, making the game less random.
To fix this, I added a small cost associated with the sum of the squares of the number of each unique tile design. This is why it was convenient to normalize the tiles, so I could find duplicates. Some duplicates are okay, but by the time we get to four or five of the same design, adding more duplicates becomes very costly.

• The algorithm generated a lot of tiles that need bridges. Like salt, bridges make the map more interesting in small quantities, such as the five bridges in the photo above, out of 20 tiles in all. But I was getting tile sets that needed bridges on nearly every tile, and sometimes even stacks of them! To fix this, I added a small cost for each bridge in the final tile set.

• The tiles included a tile with all six directions connected to each other. For aesthetic reasons, I wanted the wishing well to be the only location on the map with that level of connectivity. So I added a large negative cost associated with using that specific tile design.

Here’s the resulting code.

needsBridge :: [Trail] -> Bool
needsBridge trails =
  or [conflicts a b | a <- trails, b <- trails, a /= b]
  where
    conflicts t1 t2 =
      or [d > minimum t1 && d < maximum t1 | d <- t2]
        && or [d > minimum t2 && d < maximum t2 | d <- t1]

numBridges :: [Tile] -> Int
numBridges tiles =
  length [() | Tile a _ <- tiles ++ map flipTile tiles, needsBridge a]

dupScore :: [Tile] -> Int
dupScore tiles = sum (map ((^ 2) . length) (group (sort sides)))
  where
    sides = map topSide tiles ++ map (topSide . flipTile) tiles

numFullyConnected :: [Tile] -> Int
numFullyConnected tiles =
  length [() | Tile a b <- tiles, length a == 1 || length b == 1]

scoreTiles :: RandomGen g => g -> Int -> [Tile] -> [Double]
scoreTiles g n tiles =
  [ -0.05 * fromIntegral (numBridges tiles),
    -0.02 * fromIntegral (dupScore tiles),
    -1 * fromIntegral (numFullyConnected tiles)
  ]
    ++ map (/ fromIntegral n) graphScores
  where
    (graphScores, _) = foldl' next (repeat 0, g) (replicate n tiles)
    next (soFar, g1) tiles =
      let (g2, shuffled) = shuffleTiles g1 tiles
       in (zipWith (+) soFar (scoreGraph (tileGraph shuffled)), g2)

That’s basically all the pieces. From here, I simply ask the moo library to run the optimization. I was too lazy to figure out how to pipe random number generation through moo, so I cheated with an unsafePerformIO there. I realize that means I’m kicked out of the Haskell community, but I’ll take my chances that no one reads this far down. Here’s the rest of the boilerplate. Much of it is copied without understanding from examples distributed with moo. Perhaps there’s some hyper-parameter tuning that would improve things, but I’m happy with what I got!

main :: IO ()
main = do
  void $
    runIO initialize $
      loopIO
        [DoEvery 10 logStats, TimeLimit (12 * 3600)]
        (Generations maxBound)
        nextGen

initialize = return (replicate popsize (encodeTiles originalTiles))

logStats n pop = do
  let best =
        decodeTiles $
          head $
            map takeGenome $
              bestFirst Maximizing pop
  putStrLn $ show n ++ ":"
  mapM_ print best
  g <- newStdGen
  print $ scoreTiles g 500 best
nextGen =
  nextGeneration
    Maximizing
    objective
    select
    elitesize
    (onePointCrossover 0.5)
    (pointMutate 0.5)
select = tournamentSelect Maximizing 2 (popsize - elitesize)
popsize = 20

elitesize = 5
objective :: Genome Bool -> Double
objective gen = unsafePerformIO $ do
  g <- newStdGen
  return $ sum $ scoreTiles g 500 $ decodeTiles gen

### The Final Result

Using moo, I was able to maintain or improve on all components of the scoring.

For the tiles I drew in attempt #2:

• 13 bridges were needed across the 40 tile designs.
• The duplicate score (sum of squares of the count of each unique design) was 106, indicating several designs that were duplicated four and five times.
• There was an average of 9.1 extra edges creating cycles in the maze.
• The probability of a path between key locations was about 93%.

For the newly optimized tiles:

• There were only 11 bridges needed, saving two bridges over the original design.
• The duplicate score was 72, indicating many fewer copies of the same design, and never more than three copies of a design.
• There was an average of 8.4 extra edges creating cycles in the maze, nearly one less than in the original.
• The probability of a path between key locations was about 92%, which is essentially the same as the original.

For the most part, though, I think I learned that the tiles I’d previously devised were pretty well chosen, but I did get an incremental improvement in making the maze more challenging without compromising much on the possibility of success.

The art for the new tile set is not yet as pretty as the old one, but here’s a maze laid out with the new tiles.

So what about the rest of the game? It’s been a lot of work! The maze generation was actually a pretty small part. There are a bunch of details around character-building (each player chooses one of six characters with a unique personality, story, and abilities), other mechanics (there’s a monster who moves randomly around the maze, quests that take characters to various locations to collect items, a magical fog that hides and changes parts of the maze as you play, etc.), and artwork.

Unfortunately, I return to my full-time job tomorrow and may not have the free time to pursue frivolous goals. But who knows… perhaps I’ll find the time, hit Kickstarter, and finish the job. In any case, I’ve learned some things and had a good time, and I hope you enjoyed the story.

Optimizing a maze with graph theory, genetic algorithms, and Haskell was originally published in Analytics Vidhya on Medium, where people are continuing the conversation by highlighting and responding to this story.

# Code is Engineering, Types are Science

Juan Raphael Diaz Simões

Programming is a diverse activity that requires reasoning in many different ways. Sometimes one has to think like an engineer—find the best solution for a problem under multiple constraints. Other times, one has to think like a scientist—observe the data points you have in order to establish general rules that help you attain your goals. These patterns of thinking have very different natures.

In this blog post, I will explain these patterns through the theory of reasoning of Charles Sanders Peirce. Peirce divides reasoning into three complementary processes[1]: deduction, abduction and induction. In the following sections we will go through these logical processes and see how they relate to software.

But before starting, I want to make explicit the underlying subject of this post: plausible reasoning. Plausible reasoning does not imply certainty or truth—it better reflects the concept of an educated guess. At minimum, these guesses must not be contradicted by the information we have at hand. At best, with more information, we can choose the guess with the most chance of being true. With that out of the way, let's dive into our definitions.

## Deduction

Deduction is the process of reasoning that we acknowledge the most. It is what we learn in mathematics, and also what we think about when talking about logic. The basic schema is the following: if we know a fact A, and we know that A implies B, then we know the fact B (modus ponens). We will represent this process with the following diagram:

In philosophy books, it is very common to identify the sides of this triangle with the following phrases:

• Socrates is human (left).
• Every human is mortal (right).
• Therefore Socrates is mortal (bottom).

which can be useful when trying to identify the reasoning patterns we will see next.

The practical utility of deduction is that it is able to reduce the scope of a problem: if we want to achieve B, and we know that A implies B, we can change our goal to A if we think that is useful.

One particularity of deduction is that the result of a deduction is the certainty of a fact: if we achieve A, we are certain to achieve B. As we will see, this is not the case for the other reasoning processes.

In Haskell programming, for example, deduction is omnipresent. Every time you apply a function of type a -> b to a value of type a in order to produce a value of type b, you apply deductive reasoning.
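As a concrete illustration (the type and function names here are mine, not from the post), the Socrates syllogism can be spelled out as function application, with types standing in for propositions:

```haskell
-- Deduction as function application: a value of type a is evidence for A,
-- and a function a -> b turns it into evidence for B (modus ponens).
data Socrates = Socrates

newtype Human = Human String

newtype Mortal = Mortal String

-- "Socrates is human."
socratesIsHuman :: Socrates -> Human
socratesIsHuman Socrates = Human "Socrates"

-- "Every human is mortal."
humansAreMortal :: Human -> Mortal
humansAreMortal (Human name) = Mortal name

-- "Therefore Socrates is mortal."
main :: IO ()
main =
  let Mortal who = humansAreMortal (socratesIsHuman Socrates)
   in putStrLn (who ++ " is mortal")
```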

## Abduction

A client sends you a bug report describing some anomalous behavior in the software you work on. You run it locally, and observe the same behavior. After some changes in the code you do not observe the offending behavior anymore, and declare the bug as solved.

The first thing to notice is that the reasoning above is not logically correct. Indeed, we can simplify it as follows:

• I don't see the behavior anymore.
• If the bug is solved, I should not see the behavior.
• Therefore, I solved the bug.

It is possible that the anomalous behavior was the result of two interacting parts, and that your manipulation of one of them merely hid the underlying cause of the bug, still present in the other component. However this is plausible reasoning—given the constraints you have (the observation or not of bugs locally), you managed to propose a solution that is coherent and non-contradictory, and that has a chance of being correct. And as developers, we know that very often, this works.

The mechanism of abduction is the following: if we know a fact B, and we know that A implies B, then we abduce A as a plausible thing. This can be represented in the following diagram:

Abduction is the process that we identify with writing code and engineering, since it is about solving a problem under constraints. This is not the only component of engineering, but it is one of its defining aspects. In the case of the diagram above, for example, the constraint is knowing that A implies B.

We can now connect abduction with writing code in a typed language. What happens when we try to make an action plan based on abductive reasoning? If we take A to correspond to "code works" and B to correspond to "code typechecks", we can act using abduction as follows:

• I want to write working code.
• My only information is that working code must typecheck.
• Therefore, I try to write code that typechecks so that it can work.

This reasoning can be successful or unsuccessful depending on the context. It has almost no chance of working in the C language, but has a very good chance of working if you are doing something simple in Idris. It can also describe the experience of a beginner in a Haskell codebase, summed up by the expression "just follow the types". This will not necessarily lead to working code, but will allow the user to run the code and obtain more information about it.
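As a hypothetical illustration of "following the types": if you replace the body of a sufficiently polymorphic function with a typed hole, GHC reports the type the hole must have, and the types leave essentially one total implementation.

```haskell
-- With a body of `_`, GHC would report "Found hole: _ :: c" and list
-- f :: b -> c, g :: a -> b, x :: a as relevant bindings. Chaining them
-- is the only total implementation the signature allows.
compose2 :: (b -> c) -> (a -> b) -> a -> c
compose2 f g x = f (g x)

main :: IO ()
main = print (compose2 (+ 1) (* 2) (10 :: Int))
```

The resulting code typechecks, which is no guarantee that it is what you wanted; it is only a plausible candidate that the constraints did not rule out.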

In summary, abduction corresponds to using all the information one has in order to obtain a reasonable, or if the context permits, the best solution to a problem. This is typically the job of the engineer, and something that any programmer can relate to: do the best you can under the constraints you have.

## Induction

Your company's server crashed at a given time last night. You read the log files and see that there is an "out of memory" error timestamped to the moment of the server crash. Therefore, you diagnose this error as the culprit for the server crash.

The first thing to notice is that the reasoning above is not logically correct. Indeed, we can simplify it as follows:

• The out of memory error happened at time T.
• The server crashed at time T.
• Therefore, the out of memory error caused the server to crash.

Indeed, it can be the case that this error was safely caught and the server crashed because of an electricity problem. However this is plausible reasoning—given two facts that have the same origin (the same moment in time), you managed to establish a causality that is coherent and non-contradictory, and that has a chance of being correct. And as developers, we know that very often, this works.

The mechanism of induction [2] is the following: if we know a fact B and we know a fact A, then we induce A implies B as a plausible thing. This can be represented in the following diagram:

Induction is the process that we identify with writing types and science, especially with natural sciences, because it is about establishing constraints and general rules that limit possible behaviors. This is not the only component of science, but it is one of its defining aspects. In the case of the diagram above, for example, the established constraint is that A implies B.

We can now connect induction with writing types for a library. What happens when we try to make an action plan based on inductive reasoning? If we take A to correspond to "code is wrong" and B to correspond to "code does not typecheck", we can act using induction as follows:

• I know some code is wrong, and I want to avoid it in production.
• Code that doesn't typecheck doesn't run.
• Therefore, I write types so that this code does not typecheck.

This corresponds to the post-hoc type modelling of an existing codebase, that tries to preserve important invariants and prevent future errors. The establishment of a type system is akin to the creation of a general rule that allows some desirable code examples to be valid, and undesirable ones to not exist.
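A common concrete instance of this (a sketch with hypothetical names, not taken from the post) is the smart-constructor pattern, where a type is written so that an undesirable state cannot exist at all:

```haskell
-- Invariant imposed by the type: the wrapped list is never empty.
-- In a real module the constructor would not be exported, so
-- `fromList` would be the only way to build a value.
newtype NonEmpty' a = NonEmpty' [a]

fromList :: [a] -> Maybe (NonEmpty' a)
fromList [] = Nothing
fromList xs = Just (NonEmpty' xs)

-- No runtime check needed: the type makes "head of empty list"
-- unrepresentable for well-constructed values.
safeHead :: NonEmpty' a -> a
safeHead (NonEmpty' (x:_)) = x

main :: IO ()
main = print (fmap safeHead (fromList [1, 2, 3 :: Int]))
```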

In summary, induction corresponds to establishing an environment of constraints and rules so that others, while working under these conditions, produce the outcomes that are expected. This is typically the job of managers, and present in many other activities, like teaching and parenting. This is also one of the roles of scientific theories, to constrain the space of possibilities when developing an engineering project.

## Conclusion

Here's the takeaway:

• Even the most well-informed decisions we take are not logical conclusions of facts - they are most often just plausible theories. At minimum, they must not be false, but if possible, we must use the information we have at hand to make the best decision.

• Abduction and induction are simple yet omnipresent mechanisms. Being able to identify when we are using one of them makes it easier to find the weak points of our reasoning and to better justify our arguments. Abductions and inductions always require supporting arguments.

• Being able to identify where plausible reasoning was used in some decision process can show where possible errors can happen, or where new data can show problems that were not foreseen before.

On another note, what about writing tests? Writing tests is also an inductive activity like writing types; they differ only in breadth and power. While types can be imposed not only on the developers of a library but also on its users, tests can impose constraints that are unavailable to a type system.

Globally, the message is that programming involves activities of different natures. Sometimes you are just plumbing functions together and you have to think like a mathematician. Sometimes you are writing functions and have to work under constraints and think like an engineer. And sometimes you have to establish frameworks and give appropriate constraints to yourself and your coworkers, work with the evidence you have and think like a scientist. And no part in this game is more or less important than the others. They are different, yet complementary.

1. Charles S. Peirce, Philosophical Writings of Peirce, 1955 ↩︎

2. Mathematical induction, the technique for proving theorems about integer numbers, for example, is not an example of induction as written here. They share the same name, but they are not related. ↩︎

# Storing generated cabal files

tl;dr: I'm moving towards recommending that hpack-using projects store their generated cabal files in their repos, and modifying Stack and Pantry to more strongly recommend this practice. This is a reversal of previous recommendations. Vote and comment on this proposal.

## Backstory

Stack 2.0 switched over to using the Pantry library to manage dependencies. Pantry does a number of things, but at its core it focuses heavily on reproducibility. The idea is that, with a fully qualified package specification, you should always get the same source code. As an example, https://example.com/foo.tar.gz would not be a fully qualified package specification, because the content in that tarball could silently change without being detected. Instead, with Pantry, you would specify something like:

size: 9526
cabal-file:
  size: 1571
name: filelock
version: 0.1.1.2
pantry-tree:
  size: 584
  sha256: 19914e8fb09ffe2116cebb8b9d19ab51452594940f1e3770e01357b874c65767


Of course, writing these out by hand is tedious and annoying, so Stack uses Pantry to generate these values for you and put them in a lock file.

Separately: Stack has long supported the ability to include hpack's package.yaml files in your source code, and to automate the generation of a .cabal file. There are two quirks we need to pay attention to with hpack:

• The cabal files it generates change from one version to the next. Some of those changes may be semantically meaningful. At the very least, each new version will stamp a different hpack version in the comments of the cabal file.
• hpack generation is a highly I/O-focused activity, looking at all of the files in a package. Furthermore, as I was recently reminded, it can refer to files outside of the specific package you're trying to build but inside the same Git repository or tarball.

Finally, Stack and Pantry make a stark distinction between two different kinds of packages. Immutable packages are things which we can assume never change. These would be one of the following:

• A package on Hackage, specified by a name, version number, and information on the Hackage revision
• A tarball or ZIP file given by a file path or URL. While these absolutely can change over time, Pantry makes an explicit recommendation that only immutable packages should be used. And the hashes and file size stored in the lock file provide protection against changes.
• A Git or Mercurial repository, specified by a commit.

On the other hand, mutable packages are packages stored as files on the file system. These are the packages that you are working on in your local project. Reproducibility is far less important here. We allow Stack to regularly check the timestamps and hashes of all of these files and determine when things need to be rebuilt.

## The conflict

There's been a debate for a while around how to manage your packages with Stack and hpack. The question is simple: do you store the generated cabal files in the repo? There are solid arguments in both directions:

• You shouldn't store the file, because generated files should in general not be stored in repositories. This can lead to unnecessary diffs, and when people are using different hpack versions, "commit battles" of the file jumping back and forth between different generated content.
• You should store the file, since for maximum reproducibility we want to ensure that we have identical cabal files as input to the build. Also, for people using build tools without built in support for hpack, it's more convenient to have a cabal file present.

I've had this discussion off and on over the years with many different people, and before Stack 2 had personally settled on the first approach: not storing the cabal files. Then I started working on Pantry.

## Early Pantry

Earlier in the development of Pantry, I made a decision to focus on reproducibility. I quickly ran into a problem with hpack: I needed to be able to tell the package name and version of a package easily, but the only code path I had for that was parsing the cabal file. In order to support hpack files for this, I would need to write the entire package contents to the filesystem, run hpack on the resulting directory, and then parse the generated file.

(I probably could have whipped up something hacky around parsing the hpack YAML file directly, but that felt like a can of worms.)

Performing these steps each time Stack or Pantry needed to know a package name/version would have been prohibitively expensive, so I dismissed the option. I also considered caching the generated cabal file, but since the generated file contents would change version by version, I didn't follow that path, since it would violate reproducibility.

## Current Pantry

An early beta tester of Stack 2.0 complained about this change. While hpack worked perfectly for mutable, local packages, it no longer worked for immutable packages. If you had a Git repository with a package, that repo didn't include the generated cabal file, and you wanted to use that repo as an extra-dep, things would fail. This didn't fail with Stack 1, so this was viewed (correctly) as a regression in functionality.

However, Stack 2 was aiming for caching and reproducibility goals that Stack 1 hadn't achieved. If anyone remembers, Stack 1 had a bad tendency to reclone Git repos far more often than you would think it should need to. Pantry's caching ultimately solved that problem, and did so by relying on reproducibility.

My initial recommendation was to require changing all Git repos used as extra-deps to include the generated cabal files. However, after further discussion with beta testers, we ended up changing Pantry instead. We added the ability to cache the generated cabal files (keyed on the version of hpack used). I was uneasy about this, but ultimately it seemed to work fine, and let us keep the functionality we wanted. So we shipped this in Pantry, in Stack 2, and continued recommending people not include generated cabal files.

## The problems arise

Unfortunately, things were far from rosy. There are now at least three problems I'm aware of with this situation:

• Continuing from before: people using build tools without hpack support are still out of luck with these repos.
• As raised in issue #4906, due to how Pantry handles subdirectories in megarepos, cabal file generation will fail for extra-deps in some cases.
• Lock files have regularly become corrupted by changing generated cabal files. If you use a new version of Stack using a different version of hpack, it will generate a different cabal file, which will change the hashes associated with a package in the lock file. This can cause a lot of frustration between teams, and undermines the whole purpose of lock files in the first place.

There are probably solutions to the second and third problem. But there's definitely no solution to the first short of including the cabal files again.

## Changes

Based on all of this, I'm recommending that we make the following changes:

• Starting immediately: update docs and Stack templates to recommend checking in generated cabal files. This would involve a few minor doc improvements, and removing *.cabal from a few .gitignore files.
• For the next releases of Pantry and Stack, add a warning any time an immutable package does not include a .cabal file. Reference potentially this blog post, and warn that lock files may be broken by doing this.
• Personally: I'll start including the generated cabal files in my repos. Since I have a bunch of them, I'll appreciate people sending PRs to modify my .gitignore files and adding the generated files, as you discover them.

For those who are truly set against including generated cabal files, all is not lost. For those cases, my recommendation would be pretty simple: keep the generated file out of your repository, and then generate a source tarball with stack sdist to be used as an extra-dep. This will essentially mirror the stack upload step you would follow to upload a package to Hackage.
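As a sketch of that workflow (the path and version below are hypothetical), the consuming project's stack.yaml would point at the generated tarball, which contains the generated cabal file and is treated as an immutable package:

```yaml
# stack.yaml of the consuming project (hypothetical path/version).
# Running `stack sdist` in the dependency's own directory produces the
# tarball, with the generated cabal file included.
extra-deps:
- ./vendor/mylib-0.1.0.0.tar.gz
```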

## Next steps

The changes necessary to make this a reality are small, and I'm happy to make the changes myself. I'm opening up a short discussion period for this topic, probably around a week, depending on how the discussion goes. If you have an opinion, please jump over to issue #5210 and either leave an emoji reaction or a comment.

# Competitive Programming in Haskell: modular arithmetic, part 2

In my last post I wrote about modular exponentiation and egcd. In this post, I consider the problem of solving modular equivalences, building on code from the previous post.

# Solving linear congruences

A linear congruence is a modular equivalence of the form

$ax \equiv b \pmod m$.

Let’s write a function to solve such equivalences for $x$. We want a pair of integers $y$ and $k$ such that $x$ is a solution to $ax \equiv b \pmod m$ if and only if $x \equiv y \pmod k$. This isn’t hard to write in the end, but takes a little bit of thought to do it properly.

First of all, if $a$ and $m$ are relatively prime (that is, $\gcd(a,m) = 1$) then we know from the last post that $a$ has an inverse modulo $m$; multiplying both sides by $a^{-1}$ yields the solution $x \equiv a^{-1} b \pmod m$.

OK, but what if $\gcd(a,m) > 1$? In this case there might not even be any solutions. For example, $2x \equiv 3 \pmod 4$ has no solutions: any even number will be equivalent to $0$ or $2$ modulo $4$, so there is no value of $x$ such that double it will be equivalent to $3$. On the other hand, $2x \equiv 2 \pmod 4$ is OK: this will be true for any odd value of $x$, that is, $x \equiv 1 \pmod 2$. In fact, it is easy to see that any common divisor of $a$ and $m$ must also divide $b$ in order to have any solutions. In case the GCD of $a$ and $m$ does divide $b$, we can simply divide through by the GCD (including dividing the modulus $m$!) and then solve the resulting equivalence.

-- solveMod a b m solves ax = b (mod m), returning a pair (y,k) (with
-- 0 <= y < k) such that x is a solution iff x = y (mod k).
solveMod :: Integer -> Integer -> Integer -> Maybe (Integer, Integer)
solveMod a b m
  | g == 1         = Just ((b * inverse m a) `mod` m, m)
  | b `mod` g == 0 = solveMod (a `div` g) (b `div` g) (m `div` g)
  | otherwise      = Nothing
  where
    g = gcd a m
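As a quick sanity check of solveMod, here is a self-contained sketch; the egcd and inverse helpers come from the previous post and are reproduced here with standard definitions, which may differ in detail from the originals:

```haskell
-- egcd a b returns (g, u, v) with g = gcd a b and u*a + v*b = g.
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (abs a, signum a, 0)
egcd a b = (g, v, u - q * v)
  where
    (q, r)    = a `divMod` b
    (g, u, v) = egcd b r

-- inverse m a computes the inverse of a modulo m (assumes gcd a m == 1).
inverse :: Integer -> Integer -> Integer
inverse m a = let (_, _, v) = egcd m a in v `mod` m

solveMod :: Integer -> Integer -> Integer -> Maybe (Integer, Integer)
solveMod a b m
  | g == 1         = Just ((b * inverse m a) `mod` m, m)
  | b `mod` g == 0 = solveMod (a `div` g) (b `div` g) (m `div` g)
  | otherwise      = Nothing
  where
    g = gcd a m

main :: IO ()
main = do
  print (solveMod 2 3 4)   -- Nothing: no x satisfies 2x = 3 (mod 4)
  print (solveMod 2 2 4)   -- Just (1,2): exactly the odd x work
  print (solveMod 5 3 7)   -- Just (2,7): 5*2 = 10 = 3 (mod 7)
```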

# Solving systems of congruences with CRT

In its most basic form, the Chinese remainder theorem (CRT) says that if we have a system of two modular equations

$\begin{array}{rcl}x &\equiv& a \pmod m \\ x &\equiv& b \pmod n\end{array}$

then as long as $m$ and $n$ are relatively prime, there is a unique solution for $x$ modulo the product $mn$; that is, the system of two equations is equivalent to a single equation of the form

$x \equiv c \pmod {mn}.$

We first compute the Bézout coefficients $u$ and $v$ such that $mu + nv = 1$ using egcd, and then compute the solution as $c = anv + bmu$. Indeed,

$c = anv + bmu = a(1 - mu) + bmu = a - amu + bmu = a + (b-a)mu$

and hence $c \equiv a \pmod m$; similarly $c \equiv b \pmod n$.

However, this is not quite general enough: we want to still be able to say something useful even if $\gcd(m,n) > 1$. I won’t go through the whole proof, but it turns out that there is a solution if and only if $a \equiv b \pmod {\gcd(m,n)}$, and we can just divide everything through by $g = \gcd(m,n)$, as we did for solving linear congruences. Here’s the code:

-- gcrt2 (a,n) (b,m) solves the pair of modular equations
--
--   x = a (mod n)
--   x = b (mod m)
--
-- It returns a pair (c, k) such that all solutions for x satisfy x =
-- c (mod k), that is, solutions are of the form x = kt + c for
-- integer t.
gcrt2 :: (Integer, Integer) -> (Integer, Integer) -> Maybe (Integer, Integer)
gcrt2 (a,n) (b,m)
  | a `mod` g == b `mod` g = Just (((a*v*m + b*u*n) `div` g) `mod` k, k)
  | otherwise              = Nothing
  where
    (g,u,v) = egcd n m
    k = (m*n) `div` g

From here we can bootstrap ourselves into solving systems of more than two equations, by iteratively combining two equations into one.

-- gcrt solves a system of modular equations.  Each equation x = a
-- (mod n) is given as a pair (a,n).  Returns a pair (z, k) such that
-- solutions for x satisfy x = z (mod k), that is, solutions are of
-- the form x = kt + z for integer t.
gcrt :: [(Integer, Integer)] -> Maybe (Integer, Integer)
gcrt []         = Nothing
gcrt [e]        = Just e
gcrt (e1:e2:es) = gcrt2 e1 e2 >>= \e -> gcrt (e:es)
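As a quick check of gcrt, here is a self-contained sketch; egcd comes from the previous post and is reproduced here with a standard definition (egcd a b returns (g, u, v) with g = gcd a b and u*a + v*b = g):

```haskell
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (abs a, signum a, 0)
egcd a b = (g, v, u - q * v)
  where
    (q, r)    = a `divMod` b
    (g, u, v) = egcd b r

gcrt2 :: (Integer, Integer) -> (Integer, Integer) -> Maybe (Integer, Integer)
gcrt2 (a,n) (b,m)
  | a `mod` g == b `mod` g = Just (((a*v*m + b*u*n) `div` g) `mod` k, k)
  | otherwise              = Nothing
  where
    (g,u,v) = egcd n m
    k = (m*n) `div` g

gcrt :: [(Integer, Integer)] -> Maybe (Integer, Integer)
gcrt []         = Nothing
gcrt [e]        = Just e
gcrt (e1:e2:es) = gcrt2 e1 e2 >>= \e -> gcrt (e:es)

main :: IO ()
main = do
  -- The classic example: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
  -- has the unique solution 23 modulo 105.
  print (gcrt [(2,3), (3,5), (2,7)])   -- Just (23,105)
  -- Moduli sharing a factor still work when the equations agree on it:
  print (gcrt [(1,4), (3,6)])          -- Just (9,12)
  -- ...while inconsistent equations have no solution:
  print (gcrt [(1,4), (2,6)])          -- Nothing
```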

# Practice problems

And here are a bunch of problems for you to practice!

# announcing arduino-copilot

arduino-copilot, released today, makes it easy to use Haskell to program an Arduino. It's a FRP style system, and uses the Copilot DSL to generate embedded C code.

## gotta blink before you can run

import Copilot.Arduino

main = arduino $ do
  led =: blinking
  delay =: constant (MilliSeconds 100)

Running that Haskell program generates an Arduino sketch in an .ino file, which can be loaded into the Arduino IDE and uploaded to the Arduino the same as any other sketch. It's also easy to use things like Arduino-Makefile to build and upload sketches generated by arduino-copilot.

## shoulders of giants

Copilot is quite an impressive embedding of C in Haskell. It was developed for NASA by Galois and is intended for safety-critical applications. So it's neat to be able to repurpose it into hobbyist microcontrollers. (I do hope to get more type safety added to Copilot though; currently it seems rather easy to confuse eg miles with kilometers when using it.)

I'm not the first person to use Copilot to program an Arduino. Anthony Cowley showed how to do it in Abstractions for the Functional Roboticist back in 2013. But he had to write a skeleton of C code around the C generated by Copilot. Among other features, arduino-copilot automates generating that C skeleton. So you don't need to remember to enable GPIO pin 13 for output in the setup function; arduino-copilot sees you're using the LED and does that for you.

frp-arduino was a big inspiration too, especially how easy it makes it to generate an Arduino sketch without writing any C. The "=:" operator in arduino-copilot is copied from it. But frp-arduino contains its own DSL, which seems less capable than Copilot. And when I looked at using frp-arduino for some real world sensing and control, it didn't seem to be possible to integrate it with existing Arduino libraries written in C. While I've not done that with arduino-copilot yet, I did design it so it should be reasonably easy to integrate it with any Arduino library.

## a more interesting example

Let's do something more interesting than flashing a LED. We'll assume pin 12 of an Arduino Uno is connected to a push button. When the button is pressed, the LED should stay lit. Otherwise, flash the LED, starting out flashing it fast, but flashing slower and slower over time, and then back to fast flashing.

{-# LANGUAGE RebindableSyntax #-}
import Copilot.Arduino.Uno

main :: IO ()
main = arduino $ do
  buttonpressed <- input pin12
  led =: buttonpressed || blinking
  delay =: MilliSeconds (longer_and_longer * 2)


This is starting to use features of the Copilot DSL; "buttonpressed || blinking" combines two FRP streams together, and "longer_and_longer * 2" does math on a stream. What a concise and readable implementation of this Arduino's behavior!

Finishing up the demo program is the implementation of longer_and_longer. This part is entirely in the Copilot DSL, and actually I lifted it from some Copilot example code. It gives a reasonable flavor of what it's like to construct streams in Copilot.

longer_and_longer :: Stream Int16
longer_and_longer = counter true $ counter true false `mod` 64 == 0

counter :: Stream Bool -> Stream Bool -> Stream Int16
counter inc reset = cnt
  where
    cnt = if reset then 0 else if inc then z + 1 else z
    z = [0] ++ cnt

This whole example turns into just 63 lines of C code, which compiles to a 1248 byte binary, so there's plenty of room left for larger, more complex programs.

## simulating an Arduino

One of Copilot's features is it can interpret code, without needing to run it on the target platform. So the Arduino's behavior can be simulated, without ever generating C code, right at the console! But first, one line of code needs to be changed, to provide some button states for the simulation:

buttonpressed <- input' pin12 [False, False, False, True, True]

Now let's see what it does:

# runghc demo.hs -i 5
delay:          digitalWrite_13:
(2)             (13,false)
(4)             (13,true)
(8)             (13,false)
(16)            (13,true)
(32)            (13,true)

Which is exactly what I described it doing! To prove that it always behaves correctly, you could use copilot-theorem.

## peek at C

Let's look at the C code that is generated by the first example, of blinking the LED. This is not the generated code, but a representation of how the C compiler sees it, after constant folding, and some very basic optimisation. This compiles to the same binary as the generated code.

void setup() {
  pinMode(13, OUTPUT);
}

void loop(void) {
  delay(100);
  digitalWrite(13, s0[s0_idx]);
  s0_idx = (++s0_idx) % 2;
}

If you compare this with hand-written C code to do the same thing, this is pretty much optimal! Looking at the C code generated for the more complex example above, you'll see a few unnecessary double computations. That's all I've found to complain about with the generated code. And no matter what you do, Copilot will always generate code that runs in constant space, and constant time.

Development of arduino-copilot was sponsored by Trenton Cronholm and Jake Vosloo on Patreon.
# Probabilistic Programming with monad-bayes, Part 3: A Bayesian Neural Network

Tweag I/O, February 26, 2020. Siddharth Bhat, Simeon Carstens, Matthias Meschede

This post is the third instalment of Tweag's Probabilistic Programming with monad-bayes series. You can find the previous parts here:

Want to make this post interactive? Try our notebook version. It includes a Nix shell, the required imports, and some helper routines for plotting. Let's start modeling!

## Introduction

Where we left off, we had learned to see linear regression not as drawing a line through a data set, but rather as figuring out how likely it is that a line from a whole distribution of lines generates the observed data set. The entire point of this is that once you know how to do this for lines, you can start fitting any model in the same fashion. In this blog post, we shall use a neural network. This will demonstrate one of the great strengths of monad-bayes: it doesn't have a preconceived idea of what a model should look like. It can define distributions of anything that you can define in Haskell.

We will need to do some linear algebra computations, which we will do with the hmatrix package.

## Model Setup

In our last blog post, we illustrated that a likelihood model defines a parametrized family of data distributions. In linear regression these data distributions are centered around lines parametrized by their slope and intercept, with variations around them parametrized by sigma. In this post, we again set up such a likelihood model, but now the distributions aren't centered on lines. Instead, they are centered on the output of a neural network that is parametrized by a weight vector and a bias vector, with a sigma parameter defining variations around the network output.
Therefore, our (very simple) neural network will be represented by:

data NN = NN
  { biass   :: Vector Double
  , weights :: Vector Double
  , sigma   :: Double
  } deriving (Eq, Show)

In a Bayesian approach, a neural network computes, given some input, a probability distribution for possible outputs. For instance, the input may be a picture, and the output a distribution of picture labels of what is in the picture (is it a camel, a car, or a house?). For this blog post, we will consider the x-coordinate as the input, and a distribution of y-coordinates (y-distribution) as the output. This will be represented by the following:

data Data = Data
  { xValue :: Double
  , yValue :: Double
  } deriving (Eq, Show)

Let's start by defining the x-dependent mean of the y-distribution (y-mean):

forwardNN :: NN -> Double -> Double
forwardNN (NN bs ws _) x =
    ws `dot` cmap activation (scalar x - bs)
  where
    activation x = if x < 0 then 0 else 1

For a given set of neural network parameters NN, forwardNN returns a function from Double to Double, from x to the y-mean of the data distribution. A full y-distribution can easily be obtained by adding normally-distributed variations around the y-mean:

errorModel :: Double -> Double -> Double -> Log Double
errorModel mean std = normalPdf mean std

The first two arguments of errorModel are the y-mean and y-sigma of the normal distribution. When this normal distribution is evaluated at a position y, which is the third parameter, the errorModel function returns the log-probability. What we've just said in two lengthy sentences can be combined into a single likelihood model like this:

likelihood :: NN -> Data -> Log Double
likelihood nn (Data xObs yObs) = errorModel yMean ySigma yObs
  where
    ySigma = sigma nn
    yMean  = forwardNN nn xObs

This function embodies our likelihood model: for given parameter values NN, it returns a data distribution, a function that assigns a log-probability to each data point.
We can, for example, pick a specific neural network:

nn = NN { biass   = vector [1, 5, 8]
        , weights = vector [2, -5, 1]
        , sigma   = 2.0
        }

and then plot the corresponding distribution:

points1 = [ (x, y, exp . ln $ likelihood nn (Data x y))
| x <- [0 .. 10]
, y <- [-10 .. 10]
]

vlShow $ plot -- checkout the notebook for the plotting code

We can see that our neural network computes distributions centered around a step function. The positions of the steps are determined by the biases, while their height is determined by the weights. There is one step per node in our neural network (3 in this example).

## Prior, Posterior and Predictive Distribution

Now let's try and train this step-function network. But instead of traditional training, we will find out a whole distribution of neural networks, weighted by how likely they are to generate the observed data. Monad-bayes knows nothing of our NN data type, so it may sound like we have to do something special to teach NN to monad-bayes. But none of that is necessary: monad-bayes simply lets us specify distributions of any data type. In the NN case, this is represented by m NN for some MonadInfer m.

In the Bayesian context, training consists in computing a posterior distribution after observing the data in the training set. In standard monad-bayes fashion this is achieved by scoring with the likelihood that a model generates all points in the training set. Haskell's combinators make this very succinct.

postNN :: MonadInfer m => m NN -> [Data] -> m NN
postNN pr obs = do
  nn <- pr
  forM_ obs (score . likelihood nn)
  return nn

We also need an uninformative prior to initiate the computation. Let's choose a uniform distribution on the permissible parameters.

uniformVec :: MonadSample m => (Double, Double) -> Int -> m (Vector Double)
uniformVec (wmin, wmax) nelements =
  vector <$> replicateM nelements (uniform wmin wmax)

priorNN :: MonadSample m => Int -> m NN
priorNN nnodes = do
  bias <- uniformVec (0, 10) nnodes
  weight <- uniformVec (-10, 10) nnodes
  sigma <- uniform 0.5 1.5
  return $ NN bias weight sigma

Notice how we create a distribution of vectors in uniformVec, as m (Vector Double). As was the case for neural networks, monad-bayes doesn't know anything about vectors.

Finally, we can use the posterior distribution to predict more data. To predict a data point, we literally draw uniformly from permissible points, then score them according to the neural network distribution. Monad-bayes ensures that this can be done efficiently.

predDist :: MonadInfer m => m NN -> m (NN, Data)
predDist pr = do
  nn <- pr
  x <- uniform 0 10
  y <- uniform (-5) 10
  score $ likelihood nn (Data x y)
  return (nn, Data x y)


We return the neural network alongside the actual data point; this is mere convenience.

## Some Examples

With this setup, we can infer a predictive data distribution from observations. Let's see how our network handles a line with slope 0.5 and intercept -2:

nsamples = 200
noise <- sampleIOfixed $ replicateM nsamples $ normal 0.0 0.5
observations =
  [ Data x (0.5 * x - 2 + n)
  | (x,n) <- zip [0, (10 / nsamples) ..] noise
  ]


We can sample from the predictive data distribution with this snippet:

nnodes = 3
mkSampler = prior . mh 60000
predicted <-
  sampleIOfixed $ mkSampler $ predDist $ postNN (priorNN nnodes) observations

And we get this distribution:

hist = histo2D (0, 10, 10) (-10, 20, 10) ((\(_, d) -> (xValue d, yValue d)) <$> predicted)
cents = Vec.toList $ DH.binsCenters $ DH.bins hist
val = Vec.toList $ DH.histData hist

vlShow $ plot -- checkout the notebook for the plotting code


The predictive data distribution, shown with a blue histogram, neatly follows the observed blue scatter points. We have thus successfully "fitted" a line with a neural network using Bayesian inference! Of course, the predictive distribution is less precise than if it were, in fact, a line, since our networks' distributions are always in the form of a step function.

Lines are not very interesting, so let's observe a sine wave next:

nsamples = 200
noise <- sampleIOfixed $ replicateM nsamples $ normal 0.0 0.5
observations = take nsamples
  [ Data x (2 * sin x + 1 + n)
  | (x, n) <- zip [0, (10 / nsamples) ..] noise
  ]

nnodes = 3
mkSampler = prior . mh 60000
predicted <-
  sampleIOfixed $ mkSampler $ predDist $ postNN (priorNN nnodes) observations

hist = histo2D (0, 10, 10) (-10, 20, 10) ((\(_, d) -> (xValue d, yValue d)) <$> predicted)
cents = Vec.toList $ DH.binsCenters $ DH.bins hist
val = Vec.toList $ DH.histData hist

vlShow $ plot -- check out the notebook for the plotting code


Pretty neat! We can still see the three steps, yet we get a reasonable approximation of our sine wave.

What if, instead of visualising the data distribution, we observed the distribution of neural networks themselves? That is, the distributions of weights and biases.

ws = mconcat $ toList . weights . fst <$> predicted
bs = mconcat $ toList . biass . fst <$> predicted

hist = histo2D (-5, 20, 10) (-5, 20, 5) (zip bs ws)
cents = Vec.toList $ DH.binsCenters $ DH.bins hist
val = Vec.toList $ DH.histData hist

vlShow $ plot -- check out the notebook for the plotting code


The x-axis shows the step positions (biass) and the y-axis shows the step amplitudes (weights). We have trained a three-node (i.e. three-step) neural network, so we see three modes in the histogram: around (0, 2), around (3, -3) and around (6, 2). Indeed, these are the steps that fit the sine wave. These values are rather imprecise because we are trying to fit a sine wave with step functions.

## Conclusion

In a handful of lines of Haskell, we have trained a simple neural network. We could do this not because monad-bayes has some prior knowledge of neural networks, but because monad-bayes is completely agnostic on the types over which it can sample.

We're not advocating, of course, using this method to train real-life neural networks. It's pretty naive, but you will probably agree that it was very short, and hopefully illuminating.

The forwardNN and errorModel functions play roles that are somewhat similar to those of the forward model and the loss function in more standard, optimization-based neural network training algorithms.

Real-life neural networks have millions of parameters, and applying Bayesian inference to them requires more bespoke methods. That being said, there are practical implementations based on Bayesian inference, such as this tensorflow/edward example. Such implementations use domain-specific sampling methods.

Stay tuned for future blog posts, where we will further explore the expressiveness of monad-bayes.

# Making nutrition decisions

For various reasons, maybe not great ones, I’ve been experimenting with a new diet plan. I’m not advocating this diet generally, and not even sure if I like it for myself. I’m taking notes on how this goes, and intend to share more information later.

This diet plan is radically different from what I normally eat. Getting together with family, this has led to some real confusion. So I wanted to put together a blog post covering two related topics:

• How do I make decisions about how I’m going to eat?
• What principles do I follow in my eating strategy, regardless of the specific diet plan I’m following?

This post is a bit less structured than some of the others in this series; take it as a bit of a brain dump.

## The authorities

The core of how I make decisions comes down to this: I don’t trust the mainstream authorities to tell me what to eat. As crazy or egotistical as that may sound in a vacuum, this is far from a radical position. I’d argue it’s the only sensible decision given the data: the correlation between health guidelines and modern diseases. Specifically, since the nutrition authorities started inserting themselves into our food recommendations, the diseases they purport to prevent have only become worse.

This kind of decision leads to a few immediate questions:

I’m feeling better and losing weight, but how do I know if I’m doing long term harm? This is a concern raised often about non-standard diets, and perhaps rightfully so. There’s no long term data on the large scale health effects of a carnivore diet, for instance. That said, there is plenty of data on the long term effects of a standard diet, all bad. My approach: if you’re feeling better, go for it.

It’s working for me, but how do I know if it works for everyone? I’m just one person! Right, you’re just one person. And that’s the only person you need to worry about. If your diet is working for you, follow it. You don’t need something that will work for all members of a population. If you give advice to friends and family, make them responsible for reviewing their own results.

## Observations

Many populations across the planet have had a wide variety of diets over the past hundreds and thousands of years. Most of those populations avoided the major degenerative diseases which plague us today (heart disease, cancer, etc). You may argue that this is because they died of other causes before they could die of those diseases. I encourage you to research the topic more fully; I don’t believe the data says that.

Anyway, these observations introduce what initially appear to be paradoxes: how can you have healthy populations that consume such varied diets as:

• Massively high animal products
• Mostly grains
• Mostly other starch sources
• Combinations of fat and starch
• Hunter gatherers consuming copious amounts of honey

There are two potential answers, both of which I think are true:

• The lifestyles of these groups may have been different
• There’s nothing inherently wrong with any of these diets, and our health issues stem from a different source

Personally, I tend towards believing that the second answer is the stronger one, and mostly true. There are certainly some lifestyle factors we have today that differ meaningfully from other groups. Two strongly touted ones are:

• Hunter-gatherers who consumed large amounts of honey did so during specific times of the year, had periods of famine as well, and were highly physically active. That’s quite a difference from a modern human guzzling Coca Cola in an air conditioned office.
• We may be metabolically damaged from our existing eating patterns, leading to what would have been a healthy diet turning into an unhealthy one

In other words, my gut feeling is that you’re probably safe following any historically accurate diet. But to account for possible lifestyle issues, you may want to hedge a bit and follow diets better proven to work in the modern age.

## Paleo/primal

The paleo/primal approach to eating fits in well here. I’ll start by saying that, overall, I’ve had my best successes with health and weight loss on a primal approach, and I strongly encourage it. However, I don’t really buy into the idea that everything introduced since the agricultural revolution is toxic.

## Why I experiment

Based on all that: I think there are lots of healthy eating patterns. My default/baseline diet is mostly a primal, low carb diet, veering towards carnivore. However, I still like to experiment with alternatives. Some reasons:

• Variety is the spice of life. Trying out different food again is fun.
• I like to tinker and see if I can optimize things, such as improving my weight lifting.
• I’m curious from a scientific standpoint as to how different theories work out in practice.
• Finding eating styles that are healthy and easier to adopt than something like keto could be a boon for people’s health in general.

## Constants

That said, I do try to stick to a few constants in any diet that I experiment with. This is based on the principles above and the information I’ve read. I evolve this list over time, but this represents where I’m at right now.

• Seed oils (soybean, corn oil, etc) should be avoided at almost all costs. They are a relatively new addition to our diets. Their introduction correlates closely with many disease epidemics, and there are plausible mechanistic explanations for how they cause these diseases (see references below).
• Avoid trans fats as well. I put this as second to seed oils only because it’s already well understood. If you see “partially hydrogenated” in an ingredient list, avoid it.
• Added sugar should either be avoided entirely, or at the very least limited. Sugar has been part of the human diet in one form or another for a long while, but the current levels regularly consumed far outpace what we’ve had historically.
• Focus on getting enough protein. It’s necessary, it’s satiating (so helps you avoid overeating), and outside of specific disease states like existing kidney disease, the claims of danger are in my reading without merit.
• Grains have been part of the human diet for a long time, so I find it hard to say that they’re evil. There are some arguments about it, like new strains of wheat having different properties or the refinement process being different these days. But overall, I can’t justify “wheat is evil.” That said, in my experience it’s much easier to overeat on empty calories with grains than without them.
• Even more strongly, I don’t see carbs as inherently evil either. However, I think focusing on fat tends to make more sense in a modern diet.
• Saturated fat isn’t evil. The demonization of coconut oil by the American Heart Association is hard to see as anything but a paid hit job by the seed oil industry.

## References

Here are some videos talking about the seed oil and sugar concerns that I’m sticking to the most:

# Beck-Chevalley

This is a fairly technical article; it will most likely not have any significance for you if you haven’t heard of the Beck-Chevalley condition before.

## Introduction

When one talks about “indexed (co)products” in an indexed category, it is often described as follows:

Let |\mathcal C| be an |\mathbf S|-indexed category, i.e. a pseudofunctor |\mathbf S^{op} \to \mathbf{Cat}| where |\mathbf S| is an ordinary category. Write |\mathcal C^I| for |\mathcal C(I)| and |f^* : \mathcal C^J \to \mathcal C^I| for |\mathcal C(f)| where |f : I \to J|. The functors |f^*| will be called reindexing functors. |\mathcal C| has |\mathbf S|-indexed coproducts whenever

1. each reindexing functor |f^*| has a left adjoint |\Sigma_f|, and
2. the Beck-Chevalley condition holds, i.e. whenever
$$\require{AMScd}\begin{CD} I @>h>> J \\ @VkVV @VVfV \\ K @>>g> L \end{CD}$$
is a pullback square in |\mathbf S|, then the canonical morphism |\Sigma_k \circ h^* \to g^* \circ \Sigma_f| is an isomorphism.

The first condition is reasonable, especially motivated with some examples, but the second condition is more mysterious. It’s clear that you’d need something more than simply a family of adjunctions, but it’s not clear how you could calculate the particular condition quoted. That’s the goal of this article. I will not cover what the Beck-Chevalley condition is intuitively saying. I cover that in this Stack Exchange answer from a logical perspective, though there are definitely other possible perspectives as well.

Some questions are:

1. Where does the Beck-Chevalley condition come from?
2. What is this “canonical morphism”?
3. Why do we care about pullback squares in particular?

## Indexed Functors and Indexed Natural Transformations

The concepts we’re interested in will typically be characterized by universal properties, so we’ll want an indexed notion of adjunction. We can get that by instantiating the general definition of an adjunction in any bicategory if we can make a bicategory of indexed categories. This is pretty easy since indexed categories are already described as pseudofunctors which immediately suggests a natural notion of indexed functor would be a pseudonatural transformation.

Explicitly, given indexed categories |\mathcal C, \mathcal D : \mathbf S^{op} \to \mathbf{Cat}|, an indexed functor |F : \mathcal C \to \mathcal D| consists of a functor |F^I : \mathcal C^I \to \mathcal D^I| for each object |I| of |\mathbf S| and a natural isomorphism |F^f : \mathcal D(f) \circ F^J \cong F^I \circ \mathcal C(f)| for each |f : I \to J| in |\mathbf S|.

An indexed natural transformation corresponds to a modification which is the name for the 3-cells between the 2-cells in the 3-category of 2-categories. For us, this works out to be the following: for each object |I| of |\mathbf S|, we have a natural transformation |\alpha^I : F^I \to G^I| such that for each |f : I \to J| the following diagram commutes
$$\begin{CD} \mathcal D(f) \circ F^J @>id_{\mathcal D(f)}*\alpha^J>> \mathcal D(f) \circ G^J \\ @V\cong VV @VV\cong V \\ F^I \circ \mathcal C(f) @>>\alpha^I*id_{\mathcal C(f)}> G^I \circ \mathcal C(f) \end{CD}$$
where the isomorphisms are the isomorphisms from the pseudonaturality of |F| and |G|.

Indexed adjunctions can now be defined via the unit and counit definition which works in any bicategory. In particular, since indexed functors consist of families of functors and indexed natural transformations consist of families of natural transformations, both indexed by the objects of |\mathbf S|, part of the data of an indexed adjunction is a family of adjunctions.

Let’s work out what the additional data is. First, to establish notation, we have indexed functors |F : \mathcal D \to \mathcal C| and |U : \mathcal C \to \mathcal D| such that |F \dashv U| in an indexed sense. That means we have |\eta : Id \to U \circ F| and |\varepsilon : F \circ U \to Id| as indexed natural transformations. The first pieces of additional data, then, are the fact that |F| and |U| are indexed functors, so we have natural isomorphisms |F^f : \mathcal C(f)\circ F^J \to F^I\circ \mathcal D(f)| and |U^f : \mathcal C(f) \circ U^J \to U^I \circ \mathcal D(f)| for each |f : I \to J| in |\mathbf S|. The next pieces of additional data, or rather constraints, are the coherence conditions on |\eta| and |\varepsilon|. These work out to
$$\begin{gather} U^I(F^f)^{-1} \circ \eta_{\mathcal D(f)}^I = U_{F^J}^f \circ \mathcal D(f)\eta^J \qquad\text{and}\qquad \varepsilon_{\mathcal C(f)}^I \circ F^I U^f = \mathcal C(f)\varepsilon^J \circ (F_{U^J}^f)^{-1} \end{gather}$$

This doesn’t look too much like the example in the introduction, but maybe some of this additional data is redundant. If we didn’t already know where we end up, one hint would be that |(F^f)^{-1} : F^I \circ \mathcal C(f) \to \mathcal D(f) \circ F^J| and |U^f : \mathcal D(f) \circ U^J \to U^I \circ \mathcal C(f)| look like mates. Indeed, it would be quite nice if they were as mates uniquely determine each other and this would make the reindexing give rise to a morphism of adjunctions. Unsurprisingly, this is the case.

To recall, generally, given adjunctions |F \dashv U : \mathcal C \to \mathcal D| and |F’ \dashv U’ : \mathcal C’ \to \mathcal D’|, a morphism of adjunctions from the former to the latter is a pair of functors |K : \mathcal C \to \mathcal C’| and |L : \mathcal D \to \mathcal D’|, and a natural transformation |\lambda : F’ \circ L \to K \circ F| or, equivalently, a natural transformation |\mu : L \circ U \to U’ \circ K|. You can show that there is a bijection |[\mathcal D,\mathcal C’](F’\circ L, K \circ F) \cong [\mathcal C, \mathcal D’](L \circ U, U’ \circ K)|. Concretely, |\mu = U’K\varepsilon \circ U’\lambda_U \circ \eta’_{LU}| provides the mapping in one direction. The mapping in the other direction is similar, and we can prove it is a bijection using the triangle equalities. |\lambda| and |\mu| are referred to as mates of each other.
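For reference, the mapping in the other direction can also be spelled out explicitly; it is the evident dual composite (the triangle equalities then show the two mappings are mutually inverse): |\lambda = \varepsilon’_{KF} \circ F’\mu_F \circ F’L\eta|.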

In our case, |K| and |L| will be reindexing functors |\mathcal C(f)| and |\mathcal D(f)| respectively for some |f : I \to J|. We need to show that the family of adjunctions and the coherence conditions on |\eta| and |\varepsilon| force |(F^f)^{-1}| and |U^f| to be mates. The proof is as follows:
\begin{align} & U^I \mathcal C(f) \varepsilon^J \circ U^I(F_{U^J}^f)^{-1} \circ \eta_{\mathcal D(f)U^J}^I & \qquad \{\text{coherence of }\eta \} \\ = \quad & U^I \mathcal C(f) \varepsilon^J \circ U_{F^JU^J}^f \circ \mathcal D(f)\eta_{U^J}^J & \qquad \{\text{naturality of }U^f \} \\ = \quad & U^f \circ \mathcal D(f)U^J\varepsilon^J \circ \mathcal D(f)\eta_{U^J}^J & \qquad \{\text{functoriality of }\mathcal D(f) \} \\ = \quad & U^f \circ \mathcal D(f)(U^J\varepsilon^J \circ \eta_{U^J}^J) & \qquad \{\text{triangle equality} \} \\ = \quad & U^f & \end{align}

The next natural question is: if we know |(F^f)^{-1}| and |U^f| are mates, do we still need the coherence conditions on |\eta| and |\varepsilon|? The answer is “no”.
\begin{align} & U_{F^J}^f \circ \mathcal D(f)\eta^J & \qquad \{\text{mate of }U^f \} \\ = \quad & U^I \mathcal C(f) \varepsilon_{F^J}^J \circ U^I(F_{F^J}^f)^{-1} \circ \eta^I_{\mathcal D(f)U^I} \circ \mathcal D(f)\eta^J & \{\text{naturality of }\eta^I \} \\ = \quad & U^I \mathcal C(f) \varepsilon_{F^J}^J \circ U^I(F_{F^J}^f)^{-1} \circ U^I F^I D(f)\eta^J \circ \eta_{\mathcal D(f)}^I & \{\text{naturality of }U^I(F^f)^{-1} \} \\ = \quad & U^I \mathcal C(f) \varepsilon_{F^J}^J \circ U^I\mathcal C(f)F^J\eta^J \circ U^I (F^f)^{-1} \circ \eta_{\mathcal D(f)}^I & \{\text{functoriality of }U^I\mathcal C(f) \} \\ = \quad & U^I \mathcal C(f)(\varepsilon_{F^J}^J \circ F^J\eta^J) \circ U^I(F^f)^{-1} \circ \eta_{\mathcal D(f)}^I & \{\text{triangle equality} \} \\ = \quad & U^I (F^f)^{-1} \circ \eta_{\mathcal D(f)}^I & \end{align}
Similarly for the other coherence condition.

We’ve shown that if |U| is an indexed functor it has a left adjoint exactly when each |U^I| has a left adjoint, |F^I|, and for each |f : I \to J|, the mate of |U^f| with respect to those adjoints, which will be |(F^f)^{-1}|, is invertible. This latter condition is the Beck-Chevalley condition. As you can quickly verify, an invertible natural transformation doesn’t imply that its mate is invertible. Indeed, if |F| and |F’| are left adjoints and |\lambda : F’\circ L \to K \circ F| is invertible, then |\lambda^{-1} : K \circ F \to F’ \circ L| is not of the right form to have a mate (unless |F| and |F’| are also right adjoints and, particularly, an adjoint equivalence if we want to get an inverse to the mate of |\lambda|).

## Comprehension Categories

We’ve answered questions 1 and 2 from above, but 3 is still open, and we’ve generated a new question: what is the indexed functor whose left adjoint we’re finding? The family of reindexing functors isn’t indexed by objects of |\mathbf S| but, most obviously, by arrows of |\mathbf S|. To answer these questions, we’ll consider a more general notion of indexed (co)products.

A comprehension category is a functor |\mathcal P : \mathcal E \to \mathbf S^{\to}| (where |\mathbf S^{\to}| is the arrow category) such that |p = \mathsf{cod} \circ \mathcal P| is a (Grothendieck) fibration and |\mathcal P| takes (|p|-)cartesian arrows of |\mathcal E| to pullback squares in |\mathbf S^{\to}|. It won’t be necessary to know what a fibration is, as we’ll need only a few simple examples, but fibrations provide a different, and in many ways better, perspective1 on indexed categories and being able to move between the perspectives is valuable.

A comprehension category can also be presented as a natural transformation |\mathcal P : \{{-}\} \to p| where |\{{-}\}| is just another name for |\mathsf{dom} \circ \mathcal P|. This natural transformation induces an indexed functor |\langle\mathcal P\rangle : \mathcal C \circ p \to \mathcal C \circ \{{-}\}| where |\mathcal C| is an |\mathbf S|-indexed category. We have |\mathcal P|-(co)products when there is an indexed (left) right adjoint to this indexed functor.

One of the most important fibrations is the codomain fibration |\mathsf{cod} : \mathbf S^{\to} \to \mathbf S|, which corresponds to |Id| as a comprehension category. However, |\mathsf{cod}| is only a fibration when |\mathbf S| has all pullbacks. In particular, the cartesian morphisms of |\mathbf S^{\to}| are the pullback squares. We can still define the notion of cartesian morphism with respect to any functor; we only need |\mathbf S| to have pullbacks for |\mathsf{cod}| to be a fibration because a fibration requires you to have enough cartesian morphisms. Given any functor |p : \mathcal E \to \mathbf S|, we have a subcategory |\mathsf{Cart}(p) \hookrightarrow \mathcal E| which consists of just the cartesian morphisms of |\mathcal E|. The composite |\mathsf{Cart}(p)\hookrightarrow \mathcal E \to \mathbf S| is always a fibration.

## Conclusion

Thus, if we consider the category |\mathsf{Cart}(\mathsf{cod})|, this will consist of whatever pullback squares exist in |\mathbf S|. The inclusion |\mathsf{Cart}(\mathsf{cod}) \hookrightarrow \mathbf S^{\to}| gives us a comprehension category. Write |\vert\mathsf{cod}\vert| for that comprehension category. The definition in the introduction is now seen to be equivalent to having |\vert\mathsf{cod}\vert|-coproducts. That is, the indexed functor |\langle\vert\mathsf{cod}\vert\rangle| having an indexed left adjoint. The Beck-Chevalley condition is what is necessary to show that a family of left (or right) adjoints to (the components of) an indexed functor combine together into an indexed functor.

1. Indexed categories are, in some sense, a presentation of fibrations which are the more intrinsic notion. This means it is better to work out concepts with respect to fibrations and then see what this means for indexed categories rather than the other way around or even using the “natural” suggestions. This is why indexed categories are pseudofunctors rather than either strict or lax functors. For our purposes, we have an equivalence of 2-categories between the 2-category of |\mathbf S|-indexed categories and the 2-category of fibrations over |\mathbf S|.↩︎

# What would Dijkstra do? Proving the associativity of min

This semester I’m teaching a Discrete Mathematics course. Recently, I assigned my students a homework problem from the textbook that asked them to prove that the binary $\min$ operator on the real numbers is associative, that is, for all real numbers $a$, $b$, and $c$,

$\min(a, \min(b,c)) = \min(\min(a,b), c)$.

You might like to pause for a minute to think about how you would prove this! Of course, how you prove it depends on how you define $\min$, so you might like to think about that too.

The book expected them to do a proof by cases, with some sort of case split on the order of $a$, $b$, and $c$. What they turned in was mostly pretty good, actually, but while grading it I became disgusted with the whole thing and thought there has to be a better way.

I was reminded of an example of Dijkstra’s that I remember reading. So I asked myself—what would Dijkstra do? The thing I remember reading may have, in fact, been this exact proof, but I couldn’t remember any details and I still can’t find it now, so I had to (re-)work out the details, guided only by some vague intuitions.

Dijkstra would certainly advocate proving associativity of $\min$ using a calculational approach. Dijkstra would also advocate using a symmetric infix operator symbol for a commutative and associative operation, so let’s adopt the symbol $\downarrow$ for $\min$. ($\sqcap$ would also be a reasonable choice, though I find it less mnemonic.)

How can we calculate with $\downarrow$? We have to come up with some way to characterize it that allows us to transform expressions involving $\downarrow$ into something else more fundamental. The most obvious definition would be “$a \downarrow b = a$ if $a \leq b$, and $b$ otherwise”. However, although this is a fantastic implementation of $\downarrow$ if you actually want to run it, it is not so great for reasoning about $\downarrow$, precisely because it involves doing a case split on whether $a \leq b$. This is the definition that leads to the ugly proof by cases.

How else could we define it? The usual more mathematically sophisticated way to define it would be as a greatest lower bound, that is, “$x = a \downarrow b$ if and only if $x \leq a$ and $x \leq b$ and $x$ is the greatest such number, that is, for any other $y$ such that $y \leq a$ and $y \leq b$, we have $y \leq x$.” However, this is a bit roundabout and also not so conducive to calculation.

My first epiphany was that the best way to characterize $\downarrow$ is by its relationship to $\leq$. After one or two abortive attempts, I hit upon the right idea:

$(a \leq b \downarrow c) \leftrightarrow (a \leq b \land a \leq c)$

That is, an arbitrary $a$ is less than or equal to the minimum of $b$ and $c$ precisely when it is less than or equal to both. In fact, this completely characterizes $\downarrow$, and is equivalent to the second definition given above.1 (You should try convincing yourself of this!)

But how do we get anywhere from $a \downarrow (b \downarrow c)$ by itself? We need to somehow introduce a thing which is less than or equal to it, so we can apply our characterization. My second epiphany was that equality of real numbers can also be characterized by having the same “downsets”, i.e. two real numbers are equal if and only if the sets of real numbers less than or equal to them are the same. That is,

$(x = y) \leftrightarrow (\forall z.\; (z \leq x) \leftrightarrow (z \leq y))$

Now the proof almost writes itself. Let $z \in \mathbb{R}$ be arbitrary; we calculate as follows:

$\begin{array}{cl} & z \leq a \downarrow (b \downarrow c) \\ \leftrightarrow & \\ & z \leq a \land (z \leq b \downarrow c) \\ \leftrightarrow & \\ & z \leq a \land (z \leq b \land z \leq c) \\ \leftrightarrow & \\ & (z \leq a \land z \leq b) \land z \leq c \\ \leftrightarrow & \\ & (z \leq a \downarrow b) \land z \leq c \\ \leftrightarrow & \\ & z \leq (a \downarrow b) \downarrow c \end{array}$

Of course this uses our characterization of $\downarrow$ via its relationship to $\leq$, along with the fact that $\land$ is associative. Since we have proven that $z \leq a \downarrow (b \downarrow c)$ if and only if $z \leq (a \downarrow b) \downarrow c$ for arbitrary $z$, therefore $a \downarrow (b \downarrow c) = (a \downarrow b) \downarrow c$.
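As a quick sanity check of both the characterization and the resulting associativity law, we can test them over a small grid of values in Haskell (a finite check, of course, not a proof):

```haskell
-- Check (a <= b `min` c) <-> (a <= b && a <= c), and associativity of min,
-- over a small grid of doubles. A finite sanity check, not a proof.
charOK :: Double -> Double -> Double -> Bool
charOK a b c = (a <= min b c) == (a <= b && a <= c)

assocOK :: Double -> Double -> Double -> Bool
assocOK a b c = min a (min b c) == min (min a b) c

checkAll :: Bool
checkAll = and [ charOK a b c && assocOK a b c
               | a <- grid, b <- grid, c <- grid ]
  where
    grid = [-2, -1.5 .. 2] :: [Double]
```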

1. Thinking about it later, I realized that this should not be surprising: it’s just characterizing $\downarrow$ as the categorical product, i.e. meet, i.e. greatest lower bound, in the poset of real numbers ordered by the usual $\leq$.

## February 20, 2020

### Donnacha Oisín Kidney

Posted on February 20, 2020

This post will be quite light on details: I’m trying to gather up all of the material in this series to be a chapter in my Master’s thesis, so I’m going to leave the heavy-duty explanations and theory for that. Once finished I will probably do a short write up on this blog.

That said, the reason I’m writing this post is that in writing my thesis I figured out a nice way to solve the problem I first wrote about in this post. I won’t restate it in its entirety, but basically we’re looking for a function with the following signature:

bft :: Applicative f => (a -> f b) -> Tree a -> f (Tree b)

Seasoned Haskellers will recognise it as a “traversal”. However, this shouldn’t be an ordinary traversal: that, after all, can be derived automatically by the compiler these days. Instead, the Applicative effects should be evaluated in breadth-first order. To put it another way, if we have a function which lists the elements of a tree in breadth-first order:

bfs :: Tree a -> [a]

Then we should have the following identity:

bft (\x -> ([x], x)) t = (bfs t, t)

Here we’re using the writer Applicative with the list monoid as a way to talk about the ordering of effects.

There are many solutions to the puzzle (see Gibbons 2015, Easterly 2019, or any of the posts in this series), but I had found them mostly unsatisfying. They basically relied on enumerating the tree in breadth-first order, running the traversal on the intermediate list, and then rebuilding the tree. That approach has the correct time complexity and so on, but it would be nice to deforest the intermediate structure a little bit more.

Anyways, the function I finally managed to get is the following:

bft :: Applicative f => (a -> f b) -> Tree a -> f (Tree b)
bft f (x :& xs) = liftA2 (:&) (f x) (bftF f xs)

bftF :: Applicative f => (a -> f b) -> [Tree a] -> f [Tree b]
bftF :: Applicative f => (a -> f b) -> [Tree a] -> f [Tree b]
bftF t = fmap head . foldr (<*>) (pure []) . foldr f [pure ([]:)]
  where
    f (x :& xs) (q : qs) = liftA2 c (t x) q : foldr f (p qs) xs

    p []     = [pure ([]:)]
    p (x:xs) = fmap (([]:).) x : xs

    c x k (xs : ks) = ((x :& xs) : y) : ys
      where (y : ys) = k ks

The Tree is defined like so:

data Tree a = a :& [Tree a]

It has all the right properties (complexity, etc.), and if you stick tildes before every irrefutable pattern-match it is also maximally lazy.
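To check the breadth-first identity from earlier, here is a self-contained version of this code, run with the writer applicative. The bfs used for the comparison is a naive level-order enumeration written just for this check (the post only gives its signature):

```haskell
import Control.Applicative (liftA2)

data Tree a = a :& [Tree a] deriving (Show, Eq)

-- A naive level-order enumeration, written only to check the identity below.
bfs :: Tree a -> [a]
bfs t = go [t]
  where
    go [] = []
    go ts = [x | x :& _ <- ts] ++ go (concat [xs | _ :& xs <- ts])

bft :: Applicative f => (a -> f b) -> Tree a -> f (Tree b)
bft f (x :& xs) = liftA2 (:&) (f x) (bftF f xs)

bftF :: Applicative f => (a -> f b) -> [Tree a] -> f [Tree b]
bftF t = fmap head . foldr (<*>) (pure []) . foldr f [pure ([]:)]
  where
    f (x :& xs) (q : qs) = liftA2 c (t x) q : foldr f (p qs) xs

    p []     = [pure ([]:)]
    p (x:xs) = fmap (([]:).) x : xs

    c x k (xs : ks) = ((x :& xs) : y) : ys
      where (y : ys) = k ks
```

For example, with t = 1 :& [2 :& [4 :& []], 3 :& []], the traversal bft (\x -> ([x], x)) t evaluates to ([1,2,3,4], t): the effects run in breadth-first order, exactly as the identity demands.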

As a bonus, here’s another small function I looked at for my thesis. It performs a topological sort of a graph.

type Graph a = a -> [a]

topoSort :: Ord a => Graph a -> [a] -> [a]
topoSort g = fst . foldr f ([], ∅)
  where
    f x (xs,s)
      | x ∈ s = (xs,s)
      | x ∉ s = first (x:) (foldr f (xs, {x} ∪ s) (g x))
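The set symbols above are notation; the same function can be written against Data.Set. Here it is, together with a tiny hypothetical three-vertex chain graph as a usage example:

```haskell
import Data.Bifunctor (first)
import qualified Data.Set as Set

type Graph a = a -> [a]

-- The algorithm above, with Data.Set standing in for ∅, ∈ and ∪.
topoSort :: Ord a => Graph a -> [a] -> [a]
topoSort g = fst . foldr f ([], Set.empty)
  where
    f x (xs, s)
      | x `Set.member` s = (xs, s)
      | otherwise        = first (x:) (foldr f (xs, Set.insert x s) (g x))

-- A hypothetical chain graph 'a' -> 'b' -> 'c':
chain :: Graph Char
chain 'a' = "b"
chain 'b' = "c"
chain _   = ""
```

topoSort chain returns "abc" whatever order the vertices are listed in, e.g. topoSort chain "cba" == "abc".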

# References

Easterly, Noah. 2019. “Functions and newtype wrappers for traversing Trees: Rampion/tree-traversals.” https://github.com/rampion/tree-traversals.

# On linear types and exceptions

Arnaud Spiwack

Haskell has exceptions. Therefore, any design for linear types in Haskell will have to deal with exceptions. It might seem impossible to square with this requirement: how can linear types, which require that values be used exactly once, accommodate exceptions, which interrupt my computation?

The mantra to remember, to dispel this feeling, is that “a function f is linear when: if its result is consumed exactly once, then its argument is consumed exactly once”. If the result is consumed multiple times, then the argument is consumed multiple times; if the result is consumed only partially (e.g. because an exception was thrown), then the argument may be consumed only partially (including not at all).

This may, at first, read as a carefully worded misdirection. But I assure you that it isn't. This conditional statement was already a property of linear logic when it was introduced in 1987. And it is indeed one of the key intuitions for how Linear Haskell interacts with exceptions.

## Back to basics with monads

The design of Linear Haskell was deliberately chosen to closely follow linear logic. This is due to an observation called the Curry-Howard correspondence, by which types in some programming languages can be read like propositions in logic. This observation has served programming language design well, and is in fact one of the original design principles of Haskell (by the way, did you know what Curry's first name was?). Haskell corresponds to intuitionistic logic, while Linear Haskell corresponds to linear logic.

If we were to design exceptions for a linearly typed language from first principles, we would start from linear logic and add exceptions on top. The methodology to do so was given to us by Eugenio Moggi: apply a monad. Indeed neither intuitionistic logic nor linear logic have a native notion of exceptions. In fact, when viewed as programming languages using the Curry-Howard correspondence, they only have terminating computations. But Moggi showed how to use monads to model effectful computations.

This has since become routine in Haskell programming, and you probably guessed where this is going: the simplest model of exceptions, for intuitionistic logic, is the Maybe monad.
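As a reminder of how Maybe models exceptions (a textbook example, not taken from this post), Nothing plays the role of a thrown exception and aborts the rest of the computation:

```haskell
-- Nothing behaves like a thrown exception: it aborts the remainder
-- of the do-block.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b  -- "throws" if b == 0
  safeDiv q c       -- "throws" if c == 0
```

For example, calc 12 3 2 gives Just 2, while calc 12 0 2 gives Nothing.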

However, in linear logic, there are several refinements of the Maybe type. Which one models exceptions? Interestingly enough, not the one called Maybe in Linear Haskell (_⊕1 in linear logic). In fact, if you read my previous blog post, you will know already that it isn't the right kind of monad to apply here. Instead we need to use the _⊕⊤ monad from linear logic, where ⊤ can be defined in Linear Haskell as

newtype Top = Top (forall b. Void #-> b)


The difference between Top and () (aka 1 in the linear logic literature) is that there is a linear function a #-> Top for every type a:

swallow :: a #-> Top
swallow x = Top $ \v -> case v of {}


But, swallow doesn't use its argument! How can it be a linear function? Remember the mantra: “a function f is linear when: if its result is consumed exactly once, then its argument is consumed exactly once”. The trick is that there is no way, in linear logic, to consume a value of type ⊤ exactly once. Neither is there a way, in Linear Haskell, to consume a value of type Top exactly once (since there is no way to apply the wrapped function exactly once). So, vacuously, swallow will consume its argument exactly once when its result is consumed exactly once.

We can use the type Either Top a to model in Haskell potentially failing computations that return a value of type a. Using swallow, we can write a computation that errors out, ignoring all the linearity requirements which we would otherwise need to honour. Since we can't consume values of type Top exactly once, we can't catch exceptions in a linear computation. We will need an unrestricted computation instead:

catch :: Either Top a -> a -> a
catch (Left _) handler = handler
catch (Right x) _ = x


So the Curry-Howard correspondence, the bridge between logic and programming, compels us to have a catch without linear arguments. Intuitively, this prevents linear variables from escaping outside of the catch.

## Resource management

Now that we know how to model exceptions in linear functions, let us turn to how exceptions interact with applications of linear types. More specifically, how do exceptions interact with resource management? Resource management is one of the original motivations for Linear Haskell. In a recent blog post, we described, for instance, how we use linear types to manage references across two different garbage collectors to avoid memory leaks in inline-java applications.
The point of linear types for resource management is that the types force us to call the release function on our resources to free them, allowing for precise, yet safe, management of the resource: we simply can't forget to call release. It may seem that exceptions throw a wrench in this plan: since an exception can interrupt the computation at any time, it can entirely bypass the call to release.

However, Linear Haskell doesn't have any notion of resource or resource management. Linear Haskell is only a type system, and doesn't extend the compiler with new concepts. Quite the contrary: the philosophy of Linear Haskell is to empower the programmer to add new abstractions in user space (i.e. without requiring additional compiler support). So the question isn't "does Linear Haskell correctly manage resources?" but rather: is it possible to write a resource management abstraction in Linear Haskell? And to this, the answer is an emphatic yes. I can make such an unabashed claim because, quite simply, I wrote one. The high-level view is:

• there is a resource monad RIO in which resources are managed,
• each resource type has acquire and release functions,
• resources are linear values,
• the function run :: RIO (Unrestricted a) -> IO a (notice the unrestricted arrow) is itself responsible for releasing all remaining resources if an exception occurs.

This way, resource management is entirely under the programmer's control: resources are released when the appropriate function is called. The programmer can't forget to release a resource, or use it after releasing it, since the type system prevents both. If an exception occurs, then all the resources are cleaned up immediately. To reiterate: saying that linear types are exception-safe, or on the contrary exception-oblivious, doesn't make sense.
Whether system resources are always released in a timely manner, even in the face of exceptions, is a property of the abstraction you create using linear types to enable programmers to have full but safe control over resources. The resource monad is one such abstraction, just like the Either Top monad is an abstraction, with different properties, for modeling potentially failing computations. The same is true of virtually any other type system feature, like higher-ranked types, GADTs, or dependent types: they are tools to build abstractions with desirable properties.

## Thoughts about affine types

It is tempting to say that since computations can be interrupted by exceptions, this system is an affine type system. This is misleading. An affine function is such that if its result is used exactly once, then its argument is used at most once. It's a very different system. For instance, for resource management, there is no guarantee that a resource is released before the entire computation has ended. We could be waiting a long time. It's harder to create an abstraction that provides the same properties as above.

There is indeed some connection between affine types and exceptions. Most notably, catch can be affine in an affine type system. But wrapping linear logic in the Either Top monad doesn't make it affine. In fact, affinity is not a monadic effect: it's a comonadic coeffect. But this is a story for another time. If you grab me over tea, you can easily get me to talk about it.

## Conclusion

I hope to have convinced you that the interaction of linearity and exceptions in Linear Haskell, as it is currently designed, is not only reasonable: it is natural and necessary. It doesn't mean that exceptions are free: when writing a new linear abstraction using unsafe functions (such as the FFI), it is your responsibility to ensure that the functions you write are indeed linear, just as when using unsafePerformIO you need to make sure that the computation really is pure.
And when doing so, you need to be mindful of exceptions, which can complicate the implementation. But Haskell has exceptions, and this complication is unavoidable.

## February 15, 2020

### Donnacha Oisín Kidney

# Typing TABA

Posted on February 15, 2020

Tags: Haskell

Just a short one again today! There’s an excellent talk by Kenneth Foner at Compose from 2016 which goes through a paper by Danvy and Goldberg (2005) called “There and Back Again” (or TABA). You should watch the talk and read the paper if you’re in any way excited by the weird and wonderful algorithms we use in functional languages to do simple things like reversing a list.

The function focused on in the paper is one which does the following:

zipRev :: [a] -> [b] -> [(a,b)]
zipRev xs ys = zip xs (reverse ys)

But it does so in one pass, without reversing the second list. It uses a not-insignificant bit of cleverness to do it, but you can actually arrive at the same solution in a pretty straightforward way by aggressively converting everything you can to a fold. The result is the following:

zipRev :: [a] -> [b] -> [(a,b)]
zipRev xs ys = foldl f b ys xs
  where
    b _ = []
    f k y (x:xs) = (x,y) : k xs

I have written a little more on this function and the general technique before. The talk goes through the same stuff, but then takes a turn to proving the function total: our version above won’t work correctly if the lists don’t have the same length, so it would be nice to provide that guarantee in the types somehow. Directly translating the version from the TABA paper into one which uses length-indexed vectors will require some nasty, expensive proofs, though, which end up making the whole function quadratic. The solution in the talk is to call out to an external solver, which gives some extremely slick proofs (and a very nice interface). However, yesterday I realised you needn’t use a solver at all: you can type the Haskell version just fine, and you don’t even need the fanciest of type-level features.
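As a quick sanity check (my own, not from the post), the one-pass zipRev really does agree with the naive zip-plus-reverse specification when the lists have equal length:

```haskell
-- The naive specification: zip against the reversed second list.
zipRevSpec :: [a] -> [b] -> [(a, b)]
zipRevSpec xs ys = zip xs (reverse ys)

-- The one-pass version from the paper: folding over ys builds up a
-- continuation that walks xs as the fold unwinds, so ys is never
-- reversed. Only defined for lists of equal length.
zipRev :: [a] -> [b] -> [(a, b)]
zipRev xs ys = foldl f b ys xs
  where
    b _ = []
    f k y (x:xs') = (x, y) : k xs'
```

For example, zipRev [1,2,3] "abc" evaluates to [(1,'c'),(2,'b'),(3,'a')], the same as the specification.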
As ever, the solution is another fold. To demonstrate this rather short solution, we’ll first need the regular toolbox of types:

data Nat = Z | S Nat

data Vec (a :: Type) (n :: Nat) where
  Nil  :: Vec a Z
  (:-) :: a -> Vec a n -> Vec a (S n)

And now we will write a length-indexed left fold on this vector. The key trick here is that the type passed in the recursive call changes, by composition:

newtype (:.:) (f :: b -> Type) (g :: a -> b) (x :: a)
  = Comp { unComp :: f (g x) }

Safe coercions will let us use the above type safely without a performance hit, resulting in the following linear-time function:

foldlVec :: forall a b n. (forall m. a -> b m -> b (S m)) -> b Z -> Vec a n -> b n
foldlVec f b Nil = b
foldlVec f b (x :- xs) = unComp (foldlVec (c f) (Comp (f x b)) xs)
  where
    c :: (a -> b (S m) -> b (S (S m))) -> (a -> (b :.: S) m -> (b :.: S) (S m))
    c = coerce
    {-# INLINE c #-}

Now, to write the reversing zip, we need another newtype to put the parameter in the right place, but it is straightforward other than that.

newtype VecCont a b n = VecCont { runVecCont :: Vec a n -> Vec (a,b) n }

revZip :: Vec a n -> Vec b n -> Vec (a,b) n
revZip = flip $ runVecCont .
  foldlVec
    (\y k -> VecCont (\(x :- xs) -> (x,y) :- runVecCont k xs))
    (VecCont (const Nil))

Danvy, Olivier, and Mayer Goldberg. 2005. “There and Back Again.” BRICS Report Series 12 (3). doi:10.7146/brics.v12i3.21869. https://tidsskrift.dk/brics/article/view/21869.

# A Working Linux DAW

I’ve recently been watching Guy Michelmore’s YouTube videos on composing music. “That looks pretty easy” I thought to myself, which led to accidentally buying a Native Instruments M32 and attempting to compose music for myself.

As it happens, writing music is much harder than I gave it credit for. But an overwhelming amount of that difficulty is for bullshit reasons. You see, for whatever reason, the world of digital music production is a world awash in stupid dial UIs and proprietary software.

When I say proprietary software, I don’t just mean the mixing software itself. I also mean all of the drivers for the peripherals. I also also mean all of the digital instruments. Extremely frustratingly, I also also also mean even the software to download this stuff. EVEN THOUGH IT’S ALL JUST OVER HTTP ANYWAY!

Anyway, I thought I should probably write down the things I’ve learned to hopefully help keep future linux musicians sane.

## Digital Audio Workstation (DAW)

Go with REAPER DAW.

I started with LMMS because a quick search for “linux daw” suggested I use it. After a few days of learning it, this turned out to be a bad idea. The UI is frustrating, and instrument plugins don’t work very well.

REAPER, on the other hand, feels really good once you get it working. I had a few issues getting sound going: I had to choose the “ALSA” backend and turn off the “auto-disable PulseAudio” setting, because the PulseAudio backend was introducing ~200ms of latency between hitting notes and hearing them. Try the ALSA backend if you are experiencing sound latency.

You can use the ReaSynth virtual instrument to try testing your audio.

## Audio Plugins (VSTs)

Out of the box, REAPER is pretty shit. It doesn’t come with anything good, and so we’ll need to add some before we can get to making music. There are lots of great VSTs out there, but almost all of them are Windows-only. But fear not!

If you install wine-staging, you can use it to download some good, free instruments from Spitfire LABS. You’ll need to sign up for an account, install the (ahem) proprietary software, and then press the “GET” button on the LABS website. That… somehow authorizes your account, and then the proprietary software will let you download your files.

Particularly resourceful readers can also probably find copies of Massive and Kontakt too.

Make sure you get the 32bit Windows editions of any VSTs you find.

Now, none of these VSTs actually work in REAPER, but thankfully, there’s a program called Airwave that can convert Windows VSTs into Linux ones. Move your 32bit VST .dlls into ~/.wine/drive_c, then ask Airwave to install them into ~/.vst for you. Make sure this is the VST path for REAPER.

Back in REAPER, press CTRL+P and then Plugins > VST. Make sure the VST plugin path says ~/.vst, and then hit the Re-scan button. If you’re lucky, you should now be able to right-click in the main window and click “Insert virtual instrument on new track” and find your new VSTs under All Plugins > New.

## MIDI Controller

My M32 keyboard worked out of the box, sorta. The keys play keys in REAPER, and the dials are successfully recognized as inputs. But none of the useful “DAW control” buttons work. More proprietary software, hidden behind enough bullshit that I couldn’t even find it to test if it worked under wine.

I would not recommend the NI M32 due to the amount of bullshit their lack of Linux support put me through.

But if you’re stuck with one… I decided to reverse engineer the protocol and write a little driver. This program requires xdotool, and maps button presses on the M32 into keypresses. At time of writing, it just types regular English characters — unfortunate because they’re likely to conflict with other bindings, but REAPER doesn’t recognize most linux keysyms. Also, it only intermittently recognizes high-ASCII characters. What a piece of shit. I spent four hours today fighting with this.

This is the critical path I took from not knowing anything about music production to having a mostly-working setup. Knowing what I do now, it would only take 30 minutes to set up, but this blog post is the culmination of about a week of pain! Not all of it was bad though — I got to learn a lot about reverse engineering, and expect a blog post on that in the near future!

# Decimal Safety Right on The Money

Fixed point decimal numbers are used for representing all kinds of data: percentages, temperatures, distances, mass, and many others. I would like to share an approach for safely and efficiently representing currency data in Haskell with safe-decimal.

## Problems we want to solve

### Floating point

I wonder how much money gets misplaced because programmers choose a floating point type for representing money. I will not attempt to convince you that using Double or Float for monetary values is unacceptable; it is a known fact. Values like NaN, +/-Infinity and +/-0 have no meaning in handling money. In addition, the inability to represent most decimal values exactly should be enough reason to avoid floating point.

### Fixed point decimal

Floating point types make sense when numerical approximation is acceptable and you care primarily about performance rather than correctness. This is most common in numerical analysis, signal processing, and similar areas. In many other circumstances a type capable of representing decimal numbers exactly should be used instead. Unlike with floating point, with a Decimal type we manually restrict how many digits we can have after the decimal point. This is called fixed-point number representation. We use fixed-point numbers on a daily basis when paying in the store with cash or card, tracking distance with an odometer, and reading values off of a digital hydrometer or thermometer.
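To make the idea concrete, here is a toy fixed-point sketch in plain Haskell (my own illustration, not safe-decimal's types): a value stored as an Integer count of hundredths, i.e. a hard-coded scale of 2. Addition is exact, unlike its Double counterpart.

```haskell
-- Toy fixed-point value with a fixed scale of 2: the Integer holds
-- the number of hundredths, so Cents 1005 means 10.05.
newtype Cents = Cents Integer deriving (Eq, Ord)

-- Render with the decimal point two digits from the right
-- (non-negative values only, to keep the sketch short).
instance Show Cents where
  show (Cents n) = show q ++ "." ++ pad r
    where
      (q, r) = n `quotRem` 100
      pad x  = let s = show x in replicate (2 - length s) '0' ++ s

-- Addition never loses digits: it is plain Integer addition.
addCents :: Cents -> Cents -> Cents
addCents (Cents a) (Cents b) = Cents (a + b)
```

Contrast with Double, where 0.1 + 0.2 is already not equal to 0.3; Cents 10 plus Cents 20 is exactly Cents 30.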

We can represent fixed-point decimal numbers in Haskell by using an integral type for the actual value, which is called the precision, and a scale parameter, which keeps track of how far from the right the decimal point is. In safe-decimal we define a Decimal type that allows us to choose a precision (p) and supply our scale parameter (s) as a type-level natural number:

newtype Decimal r (s :: Nat) p = Decimal p
deriving (Ord, Eq, NFData, Functor, Generic)


Unlike floating point numbers we cannot move our decimal point without changing the scaling parameter and sometimes the precision as well. This means that when we use operations like multiplication or division we might have to do some rounding. The rounding strategy is selected at the type level with the r type variable. At time of writing the most common rounding strategies have been implemented: RoundHalfEven, RoundHalfUp, RoundHalfDown, RoundDown and RoundToZero. There is a plan to add more in the future.

### Precision

It is common to use a type like Integer for decimal representation, for straightforward reasons:

• Integer is easy to use
• Integer can represent any number in the universe, if you have enough memory

Let's look at an example which starts with enabling an extension in Haskell. We need to turn on DataKinds so that we can use type level natural numbers.

>>> :set -XDataKinds
>>> x = Decimal 12345 :: Decimal RoundHalfUp 4 Integer
>>> x
1.2345
>>> x * 5
6.1725
>>> roundDecimal (x * 5) :: Decimal RoundHalfUp 3 Integer
6.173


The concrete Decimal type backed by Integer has a Num instance, which is why we were able to use the literal 5 and have GHC convert it to a Decimal for us. This is how the same numbers multiplied together look as Double:

>>> 1.2345 * 5 :: Double
6.172499999999999


### Storage and Performance

Integer is nice, but in some applications it isn't an acceptable representation of our data. We might need to store decimal values in a database, transmit them over the network, or improve performance by storing numbers in an unboxed instead of a boxed array. It is faster to store a 64-bit integer value in a database than to convert a number to a sequence of bytes in a blob, as is necessary with Integer. Transmission over a network is another limitation that comes to mind: having a 508 byte limit on a UDP packet can quickly become a problem for Integer based values.

The best way to solve this is to use fixed width integer types such as Int64, Int32, Word64, etc. If precision of more than 64 bits is desired there are packages that provide 128-bit, 256-bit, and other variants of signed/unsigned integers. All of them can be used with safe-decimal, eg:

>>> import Data.Int (Int8, Int64)
>>> Decimal 12345 :: Decimal RoundHalfUp 6 Int64
0.012345
>>> Decimal 123 :: Decimal RoundHalfUp 6 Int8
0.000123


### Bounds

Even setting aside the desire for better performance and ignoring memory constraints, there are often types that have domain-specific bounds anyway. The most common example is using a signed type like Int for values that have no sensible negative interpretation; an unsigned type like Word is the better fit for values that should never be negative.

Some values that can be represented by a decimal number have a lower and upper bound that we can estimate. Percentages go from 0% to 100%, the total circulation of US dollars is about 14 trillion, and the surface temperature of a star is somewhere in the range of 225-40000K. If we use our domain-specific knowledge we can come up with some safe bounds, instead of blindly assuming that we need infinitely large values.

Beware, though, that using integral types with bounds comes with real danger: integer overflow and underflow. These are common causes of bugs in software that lead to a whole variety of exploits. This is the area where the protection in safe-decimal really shines, and here is an example of how it protects you:

>>> 123 + 4 :: Int8
127
>>> 123 + 5 :: Int8
-128
>>> x = Decimal 123 :: Decimal RoundHalfUp 6 Int8
>>> x
0.000123
>>> plusDecimalBounded x (Decimal 4) :: Maybe (Decimal RoundHalfUp 6 Int8)
Just 0.000127
>>> plusDecimalBounded x (Decimal 5) :: Maybe (Decimal RoundHalfUp 6 Int8)
Nothing
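The idea behind a bounds-checked operation like plusDecimalBounded can be sketched in plain Haskell. This is my own illustration of the technique, not safe-decimal's actual implementation: do the arithmetic in Integer, where it cannot overflow, then check the result against the bounded type's range.

```haskell
import Data.Int (Int8)

-- Checked addition for any bounded integral type: compute in Integer,
-- then verify the result still fits before converting back.
-- (Illustrative helper, not part of safe-decimal.)
plusBounded :: (Integral a, Bounded a) => a -> a -> Maybe a
plusBounded x y
  | wide < lo || wide > hi = Nothing            -- would overflow/underflow
  | otherwise              = Just (fromInteger wide)
  where
    wide = toInteger x + toInteger y
    lo   = toInteger (minBound `asTypeOf` x)
    hi   = toInteger (maxBound `asTypeOf` x)
```

With this, plusBounded (123 :: Int8) 4 gives Just 127, while plusBounded (123 :: Int8) 5 gives Nothing instead of silently wrapping to -128.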


### Runtime exceptions

We know that division by zero will result in DivideByZero exception:

>>> 1 `div` 0 :: Int
*** Exception: divide by zero


Less well known is that while some integral operations result in silent overflows, others will cause runtime exceptions:

>>> -1 * minBound :: Int
-9223372036854775808
>>> 1 `div` minBound :: Int
-1
>>> minBound `div` (-1) :: Int
*** Exception: arithmetic overflow


Floating point values also have a sad story for division by zero. You'd be surprised how often you can stumble upon those values online:

>>> 0 / 0 :: Double
NaN
>>> 1 / 0 :: Double
Infinity
>>> -1 / 0 :: Double
-Infinity


Long story short, we want to be able to prevent all of these issues from within pure code, which is exactly what safe-decimal will do for you:

>>> -1 * pure minBound :: Arith (Decimal RoundHalfUp 2 Int)
ArithError arithmetic overflow
>>> pure minBound / (-1) :: Arith (Decimal RoundHalfUp 2 Int)
ArithError arithmetic overflow
>>> 1 / 0 :: Arith (Decimal RoundHalfUp 2 Int)
ArithError divide by zero


Arith is a monad defined in safe-decimal and is used for working with arithmetic operations that can fail for one reason or another. It is isomorphic to Either SomeException, which means there is a straightforward conversion from the Arith monad to other monads that have a MonadThrow instance, via arithM and a few other helper functions:

>>> arithM (1 / 0 :: Arith (Decimal RoundHalfUp 2 Int))
*** Exception: divide by zero
>>> arithMaybe (1 / 0 :: Arith (Decimal RoundHalfUp 2 Int))
Nothing
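The relationship between an Either-like failure monad and its conversions can be illustrated with a toy model. The names ArithLike, divArith, and arithToMaybe are hypothetical stand-ins, not the library's definitions: the point is only that a failure carries the error, and converting to Maybe simply forgets it, mirroring arithM and arithMaybe above.

```haskell
-- Toy model of an Either-based arithmetic monad: Left carries the
-- error description, Right carries the successful result.
type ArithLike a = Either String a

divArith :: Int -> Int -> ArithLike Int
divArith _ 0 = Left "divide by zero"
divArith x y = Right (x `div` y)

-- Converting to Maybe discards the error information, the way
-- arithMaybe does for the real Arith monad.
arithToMaybe :: ArithLike a -> Maybe a
arithToMaybe = either (const Nothing) Just
```

Here divArith 1 0 is Left "divide by zero", and arithToMaybe collapses it to Nothing.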


## Decimal for crypto

At the beginning of the post I mentioned that we will implement a currency. Everyone seems to be implementing cryptocurrencies nowadays, so why don't we do the same?

The most popular cryptocurrency at time of writing is Bitcoin, so we'll use it for this example. A few assumptions we are going to make before we start:

• The maximum amount is 21M BTC
• No negative amounts are allowed
• Precision is up to 8 decimal places
• Smallest expressible value is 0.00000001 BTC, which is one Satoshi. It is named after the pseudonymous Satoshi Nakamoto who published the seminal Bitcoin paper.
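A quick sanity check on these assumptions: 21 million BTC at 10^8 Satoshi per BTC is about 2.1 × 10^15, comfortably below Word64's maximum of roughly 1.8 × 10^19, so a 64-bit unsigned integer can hold every representable amount.

```haskell
import Data.Word (Word64)

-- The largest possible amount, expressed in Satoshi:
-- 21 million BTC x 10^8 Satoshi per BTC.
maxSatoshi :: Integer
maxSatoshi = 21000000 * 100000000

-- Confirm the value fits in Word64 by comparing in Integer,
-- where nothing can overflow.
fitsInWord64 :: Bool
fitsInWord64 = maxSatoshi <= toInteger (maxBound :: Word64)
```

This is why the Satoshi newtype below can safely wrap Word64, while a smaller type like Word32 (max about 4.3 × 10^9) could not.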

### Definition

Here we'll demonstrate how we can represent Bitcoin with safe-decimal; in case you would like to follow along, here is the gist with all of the code presented in this blog post. First, we declare the raw amount Satoshi that will be used, so we can specify its bounds. Following that is the Bitcoin wrapper around the Decimal that specifies all we need to know in order to operate on this currency:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE NumericUnderscores #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module Bitcoin (Bitcoin) where

import Data.Word
import Numeric.Decimal
import Data.Coerce

newtype Satoshi = Satoshi Word64 deriving (Show, Eq, Ord, Enum, Num, Real, Integral)

instance Bounded Satoshi where
minBound = Satoshi 0
maxBound = Satoshi 21_000_000_00000000

data NoRounding

type BitcoinDecimal = Decimal NoRounding 8 Satoshi

newtype Bitcoin = Bitcoin BitcoinDecimal deriving (Eq, Ord, Bounded)

instance Show Bitcoin where
show (Bitcoin b) = show b


Important parts of these definitions are:

• We are using a newtype wrapper around Word64 with custom bounds, so that the library can protect us from creating an invalid value. Using Int64 would not have made a difference in this case, but a type with fewer available bits would not be enough to hold large values.
• We define no rounding strategy to make sure that at no point rounding could cause money to appear or disappear.
• We do not export the constructor for Bitcoin type to ensure that invalid values cannot be constructed manually. Smart constructors will follow below, which can be exported if needed.

### Construction and arithmetic

Helper functions that perform zero-cost coercions with Data.Coerce will be used to go between types without making us repeat their signatures.

toBitcoin :: BitcoinDecimal -> Bitcoin
toBitcoin = coerce

fromBitcoin :: Bitcoin -> BitcoinDecimal
fromBitcoin = coerce

mkBitcoin :: MonadThrow m => Rational -> m Bitcoin
mkBitcoin r = Bitcoin <$> fromRationalDecimalBoundedWithoutLoss r

plusBitcoins :: MonadThrow m => Bitcoin -> Bitcoin -> m Bitcoin
plusBitcoins b1 b2 = toBitcoin <$> (fromBitcoin b1 `plusDecimalBounded` fromBitcoin b2)

minusBitcoins :: MonadThrow m => Bitcoin -> Bitcoin -> m Bitcoin
minusBitcoins b1 b2 = toBitcoin <$> (fromBitcoin b1 `minusDecimalBounded` fromBitcoin b2)


mkBitcoin gives us a way to construct new values, while giving us the freedom to choose the monad in which we want to fail by restricting it to MonadThrow. For simplicity we'll stick to IO, but it could just as well be Maybe, Either, Arith, or many others.

>>> mkBitcoin 1.23
1.23000000
>>> mkBitcoin (-1.23)
*** Exception: arithmetic underflow


The examples below make it obvious that we are guarded from constructing invalid values from Rational:

>>> :set -XNumericUnderscores
>>> mkBitcoin 21_000_000.00000000
21000000.00000000
>>> mkBitcoin 21_000_000.00000001
*** Exception: arithmetic overflow
>>> mkBitcoin 0.123456789
*** Exception: PrecisionLoss (123456789 % 1000000000) to 8 decimal spaces


The same logic goes for operating on Bitcoin values. Nothing slips through: any operation that could produce an invalid value will result in a failure.

>>> balance <- mkBitcoin 10.05
>>> receiveAmount <- mkBitcoin 2.345
>>> plusBitcoins balance receiveAmount
12.39500000
>>> maliciousReceiveBitcoin <- mkBitcoin 20999990.0
>>> plusBitcoins balance maliciousReceiveBitcoin
*** Exception: arithmetic overflow
>>> arithEither $ plusBitcoins balance maliciousReceiveBitcoin
Left arithmetic overflow


Subtracting values is handled in the same fashion. Note that going below the lower bound will be reported as underflow, which, contrary to popular belief, is a real term not only for floating point, but for integers as well.

>>> balance <- mkBitcoin 10.05
>>> sendAmount <- mkBitcoin 1.01
>>> balance `minusBitcoins` sendAmount
9.04000000
>>> sendAmountTooMuch <- mkBitcoin 11.01
>>> balance `minusBitcoins` sendAmountTooMuch
*** Exception: arithmetic underflow
>>> sendAmountMalicious <- mkBitcoin 184467440737.09551616
*** Exception: arithmetic overflow


I would like to emphasize that in the example above we did not have to check whether the balance was sufficient for the amounts to be fully deducted from it. This means we are automatically protected from incorrect transactions as well as very common attack vectors, some of which really did happen with Bitcoin and other cryptocurrencies.

### Num and Fractional

Using a special smart constructor is cool and all, but it would be cooler if we could use our regular math operators to work with Bitcoin values and utilize GHC's desugarer to automatically convert numeric literal values too. For this we need instances of Num and Fractional. We can't write instances like these:

instance Num Bitcoin where
...
instance Fractional Bitcoin where
...


because then we would have to use partial functions for failures, which is exactly what we want to avoid. Moreover, some functions simply do not make sense for monetary values: multiplying or dividing Bitcoins together is simply undefined. We'll have to represent this special type of failure through an exception. This is a bit unfortunate, but we'll go with it anyway:

data UnsupportedOperation =
UnsupportedMultiplication | UnsupportedDivision
deriving Show

instance Exception UnsupportedOperation

instance Num (Arith Bitcoin) where
(+) = bindM2 plusBitcoins
(-) = bindM2 minusBitcoins
(*) = bindM2 (\_ _ -> throwM UnsupportedMultiplication)
abs = id
signum mb = fmap toBitcoin . signumDecimalBounded . fromBitcoin =<< mb
fromInteger i = toBitcoin <$> fromIntegerDecimalBoundedIntegral i

instance Fractional (Arith Bitcoin) where
(/) = bindM2 (\_ _ -> throwM UnsupportedDivision)
fromRational = mkBitcoin


It is important to note that defining the instances above is strictly optional; exporting helper functions that perform the same operations is preferable. We have the instances now, so we can demonstrate their use:

>>> 7.8 + 10 - 0.4 :: Arith Bitcoin
Arith 17.40000000
>>> 7.8 - 10 + 0.4 :: Arith Bitcoin
ArithError arithmetic underflow
>>> 7.8 * 10 / 0.4 :: Arith Bitcoin
ArithError UnsupportedMultiplication
>>> 7.8 / 10 * 0.4 :: Arith Bitcoin
ArithError UnsupportedDivision
>>> 7.8 - 7.7 + 0.4 :: Arith Bitcoin
Arith 0.50000000
>>> 0.4 - 7.7 + 7.8 :: Arith Bitcoin
ArithError arithmetic underflow


The order of operations can play tricks on you, which probably serves as another reason to stick to exporting functions: mkBitcoin, plusBitcoins, minusBitcoins, and whatever other operations we might need.

Let's take a look at a more realistic example where the amount sent is supplied to us as a Scientific value, likely from some JSON object, and we want to update the balance of our account. For simplicity's sake I will use a State monad, but the same approach will work just as well with whatever stateful setup you have.

newtype Balance = Balance Bitcoin deriving Show

sendBitcoin :: MonadThrow m => Balance -> Scientific -> m (Bitcoin, Balance)
sendBitcoin startingBalance rawAmount =
  flip runStateT startingBalance $ do
    amount <- toBitcoin <$> fromScientificDecimalBounded rawAmount
    Balance balance <- get
    newBalance <- minusBitcoins balance amount
    put $ Balance newBalance
    pure amount


Usage of this simple function will demonstrate the power of the approach taken in the library as well as its limitations:

>>> balance <- mkBitcoin 10.05
>>> sendBitcoin (Balance balance) 0.5
(0.50000000,Balance 9.55000000)
>>> sendBitcoin (Balance balance) 1e-6
(0.00000100,Balance 10.04999900)
>>> sendBitcoin (Balance balance) 1e+6
*** Exception: arithmetic underflow
>>> arithEither $sendBitcoin (Balance balance) (-1) Left arithmetic underflow  We witness Overflow/Underflow errors as expected, but we get almost no information on where exactly the problem occurred and which value was responsible for it. This is something that can be fixed with customized exceptions, but for now we do achieve the most important goal, namely protecting our calculations from all the dangerous problems without doing any explicit checking. Nowhere in sendBitcoin did we have to validate our input, output, or intermediate values. Not a single if then else statement. This is because all of the information needed to determine the validity of the above operations was encoded into the type and the library enforces that validity for the programmer. ### Mixing Decimal types Although multiplying two Bitcoin values makes no sense, computing the product of an amount and a percentage makes perfect sense. So, how do we go about multiplying different decimals together? While demonstrating interoperability of different decimal types we'd like to also show how higher precision integrals can be used with Decimal. In this example we'll use a Word128 backed Decimal for computing future value. There are a couple of packages that provide 128-bit integral types and it doesn't matter which one it comes from. Our goal is to compute the savings account balance at 1.9% APY (Annual Percentage Yield) in 30 days if you start with 10,000 BTC and add 10 BTC each day. We will start by defining the rounding strategy implementation for the Word128 type and specifying the Decimal type we will be using for computation: instance Round RoundHalfUp Word128 where roundDecimal = roundHalfUp type CDecimal = Decimal RoundHalfUp 33 Word128  This is not the implementation of FV (Future Value) function as it is known in finance. It is a direct translation of how we think the accrual of interest works. 
In plain English: to compute the balance of the account tomorrow, we take the balance we have today, multiply it by the daily interest rate, and add the result to today's balance together with the amount we promised to top up daily.

futureValue :: MonadThrow m => CDecimal -> CDecimal -> CDecimal -> Int -> m CDecimal
futureValue startBalance dailyRefill apy days = do
  -- apy is in % and the year of 2020 is a leap year
  dailyScale <- fromIntegralDecimalBounded (100 * 366)
  dailyRate <- divideDecimalBoundedWithRounding apy dailyScale
  let go curBalance day
        | day < days = do
            accruedDaily <- timesDecimalBoundedWithRounding curBalance dailyRate
            nextDayBalance <- sumDecimalBounded [curBalance, accruedDaily, dailyRefill]
            go nextDayBalance (day + 1)
        | otherwise = pure curBalance
  go startBalance 0


The above implementation works on the CDecimal type. What we need to calculate is Bitcoin. This means we have to do some type conversions and scaling in order to match up the types of the futureValue function. Then we do some rounding and conversion again to reduce precision and obtain the new Balance:

futureValueBitcoin :: MonadThrow m => Balance -> Bitcoin -> Rational -> Int -> m (Balance, CDecimal)
futureValueBitcoin (Balance (Bitcoin balance)) (Bitcoin dailyRefill) apy days = do
  balance' <- scaleUpBounded (fromIntegral <$> castRounding balance)
  dailyRefill' <- scaleUpBounded (fromIntegral <$> castRounding dailyRefill)
  apy' <- fromRationalDecimalBoundedWithoutLoss apy
  endBalance <- futureValue balance' dailyRefill' apy' days
  endBalanceRounded <- integralDecimalToDecimalBounded (roundDecimal endBalance)
  pure (Balance $ Bitcoin $ castRounding endBalanceRounded, endBalance)


Now we can compute what our balance will be in 30 days:

computeBalance :: Arith (Balance, CDecimal)
computeBalance = do
  balance <- Balance <$> 10000
  topup <- 10
  futureValueBitcoin balance topup 1.9 30


Let's see what values we get and how they compare to the actual FV function that works on Double (for the curious, here is one possible implementation: numpy.fv):
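For reference (this formula is not from the original post), with payments at the end of each period, numpy.fv computes the closed-form future value for per-period rate $r$, number of periods $n$, payment $pmt$, and present value $pv$:

$$\mathrm{FV} = -\left( pv\,(1+r)^n + pmt\,\frac{(1+r)^n - 1}{r} \right)$$

This sign convention is why both the payment and the starting balance are passed as negative numbers in the comparison.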

>>> fst <$> arithM computeBalance
Balance 10315.81142818
>>> fv (1.9 / 36600) 30 (-10) (-10000)
10315.811428177167

That's pretty good. We get the accurately rounded result of our new balance. But how accurate is the computed result before the rounding is applied? As accurate as 128 bits can be in the presence of rounding:

>>> snd <$> arithM computeBalance
10315.811428176906130029412612348658890


We get much better accuracy here than we could with Double. This isn't surprising, since we have more bits at our disposal, but accuracy is not the only benefit of this calculation. The result is also deterministic! This is practically impossible to guarantee with floating point number calculations across different platforms and architectures.
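The Double drift mentioned above is easy to demonstrate. Here is a minimal standalone sketch (our own illustration, not using safe-decimal) showing that even trivially repeated addition is inexact in Double, while the same computation over scaled integer units is exact, which is the essence of what a fixed-point decimal type automates:

```haskell
import Data.List (foldl')

-- Summing 0.1 ten times in Double does not yield exactly 1.0,
-- because 0.1 has no finite binary representation.
tenTenths :: Double
tenTenths = foldl' (+) 0 (replicate 10 0.1)

-- The same computation in scaled integer units (tenths) is exact;
-- dividing back out only at the end recovers 1.0 precisely.
tenTenthsScaled :: Double
tenTenthsScaled = fromInteger (foldl' (+) 0 (replicate 10 1)) / 10

main :: IO ()
main = do
  print (tenTenths == 1.0)        -- False
  print (tenTenthsScaled == 1.0)  -- True
```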

## Available solutions

A common question people ask when a new library is announced is: "What is wrong with the currently available solutions?". That is a perfectly reasonable question, and hopefully we have a compelling answer for it.

We had a strong requirement for safety, correctness, and performance, a combination that none of the available libraries in the Haskell ecosystem could provide.

I will use Data.Fixed from base as an example and list some of the limitations that prevented us from using it:

• Backed by Integer, which makes it slower than it should be for common cases.

• Truncation instead of more useful rounding strategies:

>>> 5.39 :: Fixed E1
5.3
>>> 5.499999999999 :: Fixed E1
5.4

• No built-in protection against runtime exceptions:
>>> f = 5.49 :: Fixed E1
>>> f / 0
*** Exception: divide by zero

• There is a limited number of scaling types: E0, E1, E2, E3, E6, E9 and E12. It is possible to add new ones with HasResolution, but it is a bit inconvenient.

• No built-in ability to specify bounds. This means that there is no protection against things like negative values or going outside of artificially imposed limits.
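To make the last two bullets concrete, here is a small standalone sketch (our own code, not part of any library) of the workarounds Data.Fixed forces on you: a custom resolution declared through HasResolution, and bounds enforced manually via a newtype with a smart constructor. The names E4, NonNegative, and mkNonNegative are hypothetical, introduced only for this illustration:

```haskell
import Data.Fixed (Fixed, HasResolution (..))

-- A custom scale of 10^4, since base only ships E0, E1, E2, E3,
-- E6, E9 and E12 out of the box.
data E4

instance HasResolution E4 where
  resolution _ = 10000

-- Data.Fixed has no notion of bounds, so e.g. non-negativity must
-- be enforced by hand with a newtype and a smart constructor.
newtype NonNegative = NonNegative (Fixed E4)
  deriving (Eq, Show)

mkNonNegative :: Fixed E4 -> Maybe NonNegative
mkNonNegative x
  | x < 0     = Nothing
  | otherwise = Just (NonNegative x)

main :: IO ()
main = do
  print (5.39999 :: Fixed E4)  -- still truncates: 5.3999
  print (mkNonNegative (-1))   -- Nothing
```

This works, but every bound and every non-standard scale needs its own hand-written boilerplate, which is precisely the inconvenience described above.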

Similar arguments apply to other libraries, especially the objection regarding performance. This objection is not unfounded: our benchmarks revealed performance issues of practical relevance in existing implementations.

## Conclusion

I encourage everyone who writes software for finance, blockchain, and other areas that require exact precision and safe calculations to seriously consider all the implications of choosing the wrong data type for representing their numeric values.

Haskell is a very safe language out of the box, but as you saw in this post, it does not offer the desired level of safety when it comes to operations on numeric values. Hopefully we were able to convince you that, at least for decimal numbers, such safety can be achieved with the safe-decimal library.

If you feel like this post describes problems that are familiar to you and you are looking for a solution, please reach out to us and we will be glad to help.

# Dhall Survey Results (2019-2020)


The results are in for this year’s Dhall survey, which you can find here:

You might also want to compare to the summary of last year’s survey:

Note that I will not include all write-in responses, but you can consult the above links if you want to read them all. I will only highlight responses that are representative of clear themes in survey feedback. I am also omitting praise from this summary (which I greatly appreciate, though!), because I would like to draw attention to what needs improvement.

This year a greater number of survey respondents reported using Dhall at work:

Which option best describes your usage of Dhall:

• 9 (14.3%) - Never used Dhall
• 13 (20.6%) - Briefly tried Dhall
• 11 (17.5%) - Use Dhall (but only for personal projects)
• 11 (17.5%) - Use Dhall at work (but only me)
• 19 (30.2%) - Use Dhall at work along with coworkers

… compared to last year:

• 7 (11.7%) - Never used it
• 22 (36.7%) - Briefly tried it out
• 11 (18.3%) - Use it for my personal projects
• 19 (31.7%) - Use it at work
• 1 ( 1.7%) - Write in: Trying to convince work people to use it

… even though fewer people completed the survey this year (down from 73 responses to 64).

The most likely reason for the smaller number of responses was the greater length of the survey. This will also likely skew the distribution of responses, since people more interested in Dhall would have been more motivated to complete this year's longer survey. Next year I will probably trim the survey down again in order to gather a larger sample size.

Even taking that into account, the number of respondents using Dhall at work grew in absolute terms.

This year’s survey added a new category to distinguish whether people using Dhall at work were doing so alone or alongside their coworkers. I was pleased to see that Dhall users do not appear to have difficulty persuading their coworkers to use Dhall.

What do you use Dhall for?

CI / CD / Ops (especially Kubernetes) continue to be the most prominent use cases for using Dhall:

Writing environment variables for configurations of containerized application

deployment configs and secrets management

Higher-level common configuration from which tool configuration is derived.

project configuration (CI, K8s, etc) & templating

SRE/DevOps related configuration

Kubernetes mostly + some glue config with AWS

Kubernetes config

Mostly for kubernetes cluster configuration.

Configuration of my custom build system

dhall-kubernetes

Concourse Pipelines

publish interfaces for Ansible roles to make their usage easier through Dhall based config

Simple configuration for Haskell app, prototype kubernetes cluster config

Generate the yaml passed to –values for helm. Some gitlab ci configuration.

Application server configuration, database replacement

Setting up Docker Compose files for parts of our product to be used for automatic testing.

Configuration of application in kubernetes

Kubernetes management and shared config (application and infrastructure)

Generating yaml and build files

CI setup (generate ansible yml files)

Built bitbucket pipeline typing. Also internal configuration for a pdf-filler application.

Ansible config

For configuration and build utility (combinded with Shake)

Package definitions for a Linux ports tree builder I was building

Customer-specific configuration files

My favorite response was:

Dhall is the configuration format for my company’s product

However, there were still a few responses that didn’t fall into one of the DevOps use cases:

Test data generation

Configuration of a personal chat bot, dhall-cabal,

Game data, templating LaTeX, canonical source of truth (upstream of JSON) for Elm apps.

dzen-dhall

Configuration for personal projects and generating DBC files at work

One surprising result was that only one person wrote in Spago (PureScript's package manager) as their use of Dhall:

PureScript through spago

… even though 7 survey respondents reported using Spago in a subsequent section! In fact, 5 of them wrote in completely different use cases for Dhall. This suggests a potential issue with the survey design: people might have chosen not to write in answers that were already covered by subsequent questions.

Other responses simply noted that Dhall is a programmable configuration language:

Adding a small amount of automation to config generation

Move non-turing-complete data out of config/programs, declarative programs that get interpreted as data

Configuration

Configuration

…configuration

configuration

Configuration generation

Generate yaml configuration files

configuration of programs written in haskell

## The pitch

One of the things we do periodically is refine our “pitch”, based on feedback from users (including, but not limited to, these surveys). This section covers reasons to use Dhall in users’ own words when they answered:

Why do you use Dhall?

One of the most interesting answers to me was from the same respondent who said that Dhall was the configuration format for their company’s product. In their own words:

Because YAML is horrible. Because having multiple configuration files for a single product is horrible and dhall is the best “big config file” format.

The above pitch must be pretty compelling for a company to embrace Dhall to that extent. In fact, the latter half of the pitch is fairly similar to the one currently used by dhall-lang.org, focusing on Dhall’s suitability for configuration “in the large”, although we’ve recently de-emphasized replacing YAML specifically. Several commercial users provided input into that branding (and one of them could have been the same person who left the above feedback).

Being a "big config file" format can imply many different virtues, but in my experience users usually mean the following programming language features:

• Types (to reduce errors at scale)
• Functions (to avoid repeating oneself)
• Imports (to have a single source of truth that one can import)
• Integrations (to unify disparate configuration formats)

… and several responses were related to these “scale”-related reasons to adopt:

Consistent typing across deployment configs and environments

strong types, functions and remote (decentralised) importing

Type safety, avoid writing yaml by hand

It allows us to provide a function via environment variable

Higher-level common configuration from which tool configuration is derived.

It replaced a lot of redundant JSON configs; adding new services is a lot quicker now, and cross-cutting changes less painful

To avoid copy&paste and the maintanance problems caused by it

Type safety, safe imports, no YAML indentation madness.

To help reduce code duplication and avoid errors in the values and keys of configuration filez. Mainly used it so far to help setup a more maintainable set of Kinect clusters and users for myself, which also made it easier to add.

It has a strong type system

Type-safety, the ability to move a lot of logic/formatting (e.g. in templating) into Dhall proper, the ability to represent ADTs/union types.

Abstraction!

Because i can abstract boilerplate

It provides a sane language for configuring systems (with types and abstraction)

Type checking, configuration typo safety, factorising yaml

Abstract useful config + defaults

Doesn’t have the weird gotchas of yaml. Possible to make DRY config files.

The main reason has been to decrease duplication.

type safety

I love the idea of strongly typed config, and also the programming aspect means there is less chance for error since you can reuse a component instead of copy/paste. I have been pushing for dhall at work as there have been very severe incidents at work from bad config files

Strong type safety

I like the ability to evaluate expressions, value substitution

Strong types and non-repetitive abstractions. Ease of publishing modules

our CI setup is now big and has lots of commonalities between projects. dhall helps us avoid preventable mistakes thanks to type checks and allows us to share common config with functions

it’s typed and avoids repetition

Not repeating common parts of configuration (like Kubernetes YAML hell)

Normally a general-purpose programming language can also do these things, and several respondents noted why they preferred Dhall over a general-purpose language:

It’s non-Turing-complete, but allows imports and static typing

It’s convenient and provides a good power-to-weight ratio

Only non-turing-complete config language that is based on a lambda calculus (and is modern/not a hobby project)

Because it won’t crash at runtime (totality)

It is by far the most well-wrought typed configuration language

I want to use something with proper type theory behind it and dhall is actually almost useful for some problems we have at work

It’s a nice config language

It’s quick, compact and type-safe

simple but effektive system f, i like that

Others explained the motivation in terms of the current solution they were struggling with and trying to replace (commonly YAML):

To make sense of this pile of configuration mess

Verifying and reading through our combo of custom scripts, chef, and other solutions was a nightmare

so I don’t need to create my own configuration language and other existing ones suck more (json, yaml, that weird python thing…)

Because YAML and database were not the choices

For all the benefits over YAML

better helm-templates

## Documentation

This year we asked people what needed better documentation to see if there were any blind spots in our current coverage:

What needs better documentation?

Like last year, people most commonly requested documentation on best practices and design patterns:

Recursive things. (This has been discussed on Slack somewhat.)

Packaging

I’d love a manual on best practices and a contributing guide to the Haskell implementation

Cookbooks

The prelude/standard library, or rather, surfacing documentation centrally in browsable form

How to create defaults, for records with optional values. How to design types with backward compatibility in mind.

Nested record updating, though the blog post shows some improvement here with the default types :: syntax.

How to make use of dhall at the peripheries of a large project.

How to deal with generated files (!), e.g. CI yaml config; how to include dhall projects into nix configs (dhallToNix); best practices for pre-caching imports

Imports section could be a bit more extensive like import prelude, import private repos (GitHub, BitBucket), multi imports. currently the information is scattered around various pages.

Perhaps patterns. For example, we have { a :: ty1, b:: Maybe ty2} and want users to be able to write { a = val } without b = None or default \ { a = val }

(this is probably because I didn’t google it properly), how to properly deal with freezing and updating hashes

Best practices and real-world examples. I’d love to use something like Dhall for managing configuration at work, but it’s very hard to tell if it will handle my usecases well, and it’s hard to dive in and try it out because I don’t know if I’m doing it right

We have made progress on that front in two forms:

• This manual was created as a series of how-to guides for common tasks and idioms related to the language. Feedback from these surveys helps inform what topics I choose for each chapter.

• Some design patterns became language features

The most notable example this year is standardizing support for the record completion operator for better handling of defaults.

In fact, several improvements this year (and some currently in progress) are directly inspired by my work on the book. Any time I describe a workflow that seems too painful, I make changes to the language, tooling, or ecosystem to smooth things over.

Besides the book, the thing most likely to improve over the coming year is packaging, documenting, and discovering new Dhall packages. For example, I just created a Google Summer of Code project proposal for a student to work on a documentation generator for Dhall packages:

The second most common request was to improve documentation for the core language features:

Still not sure how the merge keyword works.

i somehow have trouble finding the doc page about record operations and iirc the record projection thing where you take the a subset of a record’s field is not included in the page listing records operations

Importing of other files

The introduction to FP. Lots of devs work in Go, Ruby, etc and need help thinking about polymorphism with sum types. Also more clarification on the let...in syntax in early guides and an explanation on why you need :let in the repl.

I tend to have a hard time finding comprehensive info on syntactic features and end up hearing about them on guthub issues

Common errors like “-/+” syntax, “not a function”

the type system

This is understandable because the closest thing Dhall has to a complete resource on this is the Haskell implementation’s tutorial:

One of my short-term goals is to translate this into a language-independent tutorial.

There is an existing language-independent tutorial for translating Dhall to JSON:

… but that doesn’t cover all of the language features like the Haskell tutorial does.

## Language bindings

Which language bindings do you currently use?

• 37 (84.1%) - Haskell
• 6 (13.6%) - Bash
• 5 (11.4%) - Nix
• 3 ( 6.8%) - Ruby
• 2 ( 4.5%) - Rust
• 1 ( 2.3%) - Golang
• 1 ( 2.3%) - Swift
• 0 ( 0.0%) - Clojure
• 0 ( 0.0%) - Eta
• 0 ( 0.0%) - Java (via Eta)

The number of Haskell users is not surprising given that the Haskell implementation also powers the shared command-line tools, like dhall/dhall-to-{json,yaml}/dhall-lsp-server. Many Dhall users do not use Haskell the language and instead use the Haskell implementation to generate JSON or YAML from Dhall while waiting for a language binding for their preferred language.

I think the one response for Swift might have been a mistaken answer intended for the next section. As far as I know, there are currently no Dhall bindings to Swift (not even ones in progress).

One of the interesting takeaways from the above question is that the JVM is one of the areas where Dhall is not being used via a native language binding, despite such bindings existing. I get the impression that most JVM users are waiting for Java/Scala bindings.

## Desired language bindings

Which language bindings would you like to see get more attention?

• 17 (39.5%) - Python
• 12 (27.9%) - Scala
• 11 (25.6%) - PureScript
• 9 (20.9%) - JavaScript
• 7 (16.3%) - Go
• 7 (16.3%) - Java
• 3 ( 7.0%) - C++
• 3 ( 7.0%) - Elm
• 2 ( 4.7%) - C#
• 2 ( 4.7%) - Kotlin
• 2 ( 4.7%) - Rust
• 1 ( 2.3%) - Swift
• 1 ( 2.3%) - TypeScript
• 1 ( 2.3%) - PHP
• 1 ( 2.3%) - Perl
• 1 ( 2.3%) - C
• 1 ( 2.3%) - A C/Rust library so all the other langs can bind to

Python is an interesting response because at one point there was progress on a Python binding to Dhall, but that stalled out.

The demand for Python makes sense because Python is used heavily in Dhall’s primary use case (CI / CD / Ops), alongside Go. In fact, Go was listed as well, although possibly not mentioned as often due to the Go binding to Dhall being far closer to completion.

In fact, the Go binding just announced the first release candidate for version 1.0.0:

Note that survey respondents preferred bindings in functional languages over their more widely used imperative counterparts. For example, there was greater demand for Scala compared to Java and greater demand for PureScript compared to JavaScript. This might owe to Dhall’s functional programming heritage.

A few survey respondents appear not to be aware that there is a complete Rust binding to Dhall now available. This is understandable, though, given that the Rust binding was only officially announced recently.

## Integrations

Which of the following integrations do you use?

• 26 (63.4%) - JSON (via dhall-to-json)
• 25 (61.0%) - YAML (via dhall-to-yaml)
• 10 (24.4%) - Kubernetes (via dhall-to-kubernetes)
• 7 (17.1%) - JSON (via Prelude.JSON.render)
• 7 (17.1%) - purescript-packages (via spago)
• 5 (12.2%) - YAML (via Prelude.JSON.renderYAML)
• 3 ( 7.3%) - Cabal (via dhall-to-cabal)
• 1 ( 2.4%) - Write-in: Nix (via dhall-to-nix)
• 0 ( 0.0%) - XML (via dhall-to-xml)
• 0 ( 0.0%) - XML (via Prelude.XML.render)
• 0 ( 0.0%) - TOML (via JSON)

The thing I take away from the above numbers is that a large number of people would still benefit from language bindings and they currently work around the absence of a language binding by generating JSON/YAML.

## Desired integrations

Which integrations would you like to see get more attention?

• 22 (68.8%) - Terraform
• 11 (34.4%) - Docker Compose
• 9 (28.1%) - HCL
• 8 (25.0%) - Prometheus
• 4 (12.5%) - Packer
• 2 ( 6.3%) - INI
• 2 ( 6.3%) - Concourse
• 2 ( 3.1%) - Write-in: Ansible
• 1 ( 3.1%) - GoCD
• 1 ( 3.1%) - Write-in: Grafana
• 1 ( 3.1%) - Write-in: Nix, Nixops
• 1 ( 3.1%) - Write-in: Dockerfile
• 1 ( 3.1%) - Write-in: Google Cloud Builder
• 1 ( 3.1%) - Write-in: Travis
• 1 ( 3.1%) - Write-in: Vault
• 1 ( 3.1%) - Write-in: GitHub Actions
• 1 ( 3.1%) - Write-in: Drone CI
• 1 ( 3.1%) - Write-in: CloudFormation
• 1 ( 3.1%) - Write-in: Jenkins
• 1 ( 3.1%) - Write-in: TOML
• 1 ( 3.1%) - Write-in: Bitbucket pipelines

Terraform was far and away the most requested integration. One of the interesting challenges about this potential integration is figuring out the right way to integrate Dhall with Terraform, because Terraform has its own programming features (like a DSL for defining function-like modules).

## Dhall packages

Which of the following Dhall packages do you use?

• 32 (100.0%) - Prelude
• 4 ( 12.5%) - dhall-packages (Dhall monorepo)
• 1 ( 3.1%) - hpack-dhall (hpack bindings)
• 1 ( 3.1%) - github-actions-dhall (GitHub Actions bindings)
• 1 ( 3.1%) - dhall-terraform (Terraform bindings)
• 1 ( 3.1%) - dhall-semver (Semantic versions)
• 1 ( 3.1%) - dhall-concourse (Concourse bindings)
• 1 ( 3.1%) - dhall-bhat (Haskell type classes in Dhall)
• 1 ( 3.1%) - dada (Recursion schemes)
• 0 ( 0.0%) - dho (CircleCI bindings)
• 0 ( 0.0%) - dhallql (Query language)
• 0 ( 0.0%) - dhallia (Dhall as an IDL)
• 0 ( 0.0%) - cpkg (C package manager)
• 0 ( 0.0%) - caterwaul (Category theory)

Unsurprisingly, most people use the Prelude. The thing that did catch my eye was how many respondents used dhall-packages (mainly because I hadn’t realized how fast it had grown since the last time I checked it out).

dhall-packages appears to have the potential to grow into the Dhall analog of Helm, meaning a repository containing useful types and predefined recipes for deploying Kubernetes services. I can see this repository easily giving Dhall a competitive edge in the Kubernetes space since I know quite a few people are looking for a Helm alternative without the headaches associated with templating YAML.

## ASCII vs Unicode

dhall format tries to be as opinionated as possible, but currently permits one way to customize behavior: the tool can either emit Unicode symbols (e.g. λ, →, ∀) or ASCII symbols (e.g. \, ->, forall).

I asked several questions about ASCII versus Unicode to see if there was a possibility of standardizing on one or the other. Unfortunately, people were split in this regard. The only thing they agreed upon was that they preferred not to input Unicode symbols directly:

Do you prefer to input ASCII or Unicode symbols in your editor (before formatting the code)?

• 58 (93.5%) - ASCII
• 4 ( 6.5%) - Unicode

… but on the other two questions people split pretty evenly, with a slight preference for Unicode:

How do you format your code?

• 24 (42.9%) - Unicode
• 22 (39.3%) - ASCII
• 6 (10.7%) - I don’t format my code
• 4 ( 7.1%) - Other

Do you prefer to read Dhall code that uses ASCII or Unicode symbols?

• 28 (50.9%) - Unicode
• 25 (45.5%) - ASCII
• 2 ( 3.6%) - Other

What’s interesting is that you get clearer preferences when you slice the data by how much people use Dhall.

For example, people who have never used Dhall or briefly tried Dhall prefer ASCII by roughly a 3-to-1 margin:

How do you format your code?

• 10 (66.7%) - ASCII
• 3 (20.0%) - I don’t format my code
• 2 (13.3%) - Unicode

Do you prefer to read Dhall code that uses ASCII or Unicode symbols?

• 12 (75.0%) - ASCII
• 4 (25.0%) - Unicode

… whereas other categories (e.g. personal projects or work) prefer Unicode by roughly a 2-to-1 margin:

How do you format your code?

• 22 (55.0%) - Unicode
• 11 (27.5%) - ASCII
• 4 (10.0%) - Other
• 3 ( 7.5%) - I don’t format my code

Do you prefer to read Dhall code that uses ASCII or Unicode symbols?

• 23 (60.5%) - Unicode
• 13 (34.2%) - ASCII
• 2 ( 5.3%) - Other

There are several possible ways to interpret that evidence:

• Perhaps Dhall could expand its potential audience by formatting ASCII

• Perhaps people prefer the Unicode syntax the more they use Dhall

• Perhaps there is a “founder effect” since originally dhall format only supported Unicode

In any case, I don't plan on making any changes to dhall format immediately, but I will use this data to inform future formatting discussions on Discourse.

One person also added Unicode-related feedback in the “Other feedback” section:

I feel strongly that unicode symbols are not worth supporting indefinitely, as they typically can’t be typed and add mental overhead when reading. The symbols themselves also have a mathematical bent, which can be intimidating for those not well versed in math / logic programming.

## Growth

Would anything encourage you to use Dhall more often?

Language bindings:

I wish there was a good way to do nix through dhall, but I don’t have any good suggestions.

something like the dhall Haskell library for purescript

Js/purescript bindings in release state. …

Scala/JVM bindings (Eta is suboptimal & unmaintained) …

python bindings

Successfully getting my work on board with it (which means having a perfect golang integration)

Getting more bindings for things I use

better docs; bindings for the JVM languages

Bindings on other languages (hard to get people to contribute on a project using the Haskell impl)

Better Python support, so I can sell it to colleagues who use Python.

Better (documented) language bindings.

Packages/Integrations:

I’d love to use it in more places, eg to replace all of our terraform code or CI configuration; currently those integrations just aren’t there and I don’t have time to bridge the gap

More libraries / packages. I think dhall needs a richer ecosystem. In particular i’d love complete terraform bindings and more kubernetes packages (a la helm)

First class Kubernetes, Terraform, Vault, Ansible integration.

When I looked at Dhall most recently, it wasn’t obvious to me that an ecosystem of packages was springing up around it. Might want to add a link (or a more prominent one if there is already one and I missed it).

More packages (like dhall-kubernetes)

… Also, better Terraform/HCL bindings, Ansible integration, or integrations for common CI systems (Jenkins/jenkins-job-builder, CircleCI, GoCD, etc)

… more integrations (I would like to be able to configure EVERYTHING using Dhall ^^)

Tooling:

Better performance

better (especially more concise) Error messages

structural editor with auto-completion based on symbols in scope and with automatic let-floating

Speed and ergonomics of the Emacs integration. Currently it’s terribly slow to type check.

Language features:

Ability to import YAML without yaml-to-dhall

Usage unicode chars in imported filenames without quotes

… Also Text/Double operations and possibly comparisons. I understand and agree with the reasons this hasn’t been done, but finding a way to do this without compromising the goal of the language would add so much potential

Formatting improvements:

Vonderhaar-style dhall format. Long multi-line literals with interpolations become really illegible currently. More built-ins. Text becoming non-opaque. The ability to expose multiple things from a record (this is currently being discussed).

Ultimately it was the autoformatter/idomatic formatting of Dhall that turned me off. I wanted my package format to be clear and readable to people new to my project but the idomatic was very messy in my eyes. Here’s a comparison of Dhall and the TCL inspired pkg definition format I came up with: https://gist.github.com/wezm/dfdce829964c410e2c521aa3ca132ddd

Social proof:

Popularity

Popularity is the big one so I could get away with it more at work. …

Documentation:

… Better docs to educate team members with.

Better starter documentation. maybe also how to put in place Dhall build in CI?

Other unassorted responses:

Hard to describe in one line, but an easier way to get values in and out of larger dhall projects (github issue 676 being a symptom of this)

For work maybe a convincing reason to use it in place of yaml. For personal, making it easier to support backward compatiblity in types.

The different type hack for multiple resources in a single Kubernetes file made me drop it, I could’ve never recommended it to my coworkers over worse (as in, worse is better) templating solutions.

Issue #1521

Generation of Dhall types from Haskell types; Easier extension with own functions

A nice bidirectional typechecker for less type annotation burden

## Contributions

One thing I checked is if there were any barriers to adoption that we were not aware of:

Would anything encourage you to contribute to the Dhall ecosystem more often?

There were not many common themes that stood out, so I’ll include the responses verbatim:

Maybe some links to resources on how to get started with programming theory (eg. info on type notation for the standard, parser, “compiler”, etc). Basically I feel I probably just need to learn more, but I’m not entirely sure what.

Nope, as a new contributor this year, the Dhall community has been an absolute delight to start contributing to.

better reporting for missing env vars (not one by one)

better personal usecases

No - I already want to do a lot more!

A contributing guide to the haskell implementation

Linked Haskell tutorials and perhaps partial (or early) implementations of dhall features

Perhaps simplified core language? I sometimes think that the language is large and difficult, especially around “safety guarantees” features, and it might make it harder to develop a new language binding.

Using it at work

I found it difficult to generate dhall-kubernetes bindings

The ecosystem seems to be very contributor-friendly, but I don’t have enough time at the moment.

Getting standards for repo layout, documentation, discovery

… and my favorite response was:

There’s a ‘good first issue’ label that’s attached to no issues.

## Conclusion

The main thing I concluded from the survey feedback is that people want us to focus on ease of integration, especially language bindings, and especially Python language bindings.

A major change from last year’s feedback was the dramatic drop in requests for better documentation / examples / use cases. People appear to understand the motivation for the language and how to use Dhall to solve their problems and now they care more about streamlining the integration process as much as possible.

If this is your first time hearing about the survey you can still complete the survey:

I receive e-mail notifications when people complete the survey so I will be aware of any feedback you provide this way. Also, completing the survey will let you browse the survey results more easily.

Before concluding this post, I would like to highlight that this year's survey used approval voting to let people select their preferred language bindings. If you're a person frustrated with a political system based on first-past-the-post voting, I encourage you to research approval voting as an alternative voting method with a high power-to-weight ratio (it is simpler than ranked-choice voting and produces better outcomes).