Planet Haskell

January 19, 2019

Haskell at Work

Purely Functional GTK+, Part 2: TodoMVC

In the last episode we built a "Hello, World" application using gi-gtk-declarative. It's now time to convert it into a to-do list application, in the style of TodoMVC.

To convert the “Hello, World!” application to a to-do list application, we begin by adjusting our data types. The Todo data type represents a single item, with a Text field for its name. We also need to import the Text type from Data.Text.

data Todo = Todo
  { name :: Text
  }

Our state will no longer be (), but a data type holding a Vector of Todo items. This means we also need to import Vector from Data.Vector.

data State = State
  { todos :: Vector Todo
  }

As the run function returns the last state value of the state reducer loop, we need to discard that return value in main. We wrap the run action in void, imported from Control.Monad.

Let’s rewrite our view function. We change the title to “TodoGTK+” and replace the label with a todoList, which we’ll define in a where binding. We use container to declare a Gtk.Box, with vertical orientation, containing all the to-do items. Using fmap and a typed hole, we see that we need a function Todo -> BoxChild Event.

view' :: State -> AppView Gtk.Window Event
view' s = bin
  Gtk.Window
  [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
  todoList
  where
    todoList = container Gtk.Box
                         [#orientation := Gtk.OrientationVertical]
                         (fmap _ (todos s))

The todoItem will render a Todo value as a Gtk.Label displaying the name.

view' :: State -> AppView Gtk.Window Event
view' s = bin
  Gtk.Window
  [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
  todoList
  where
    todoList = container Gtk.Box
                         [#orientation := Gtk.OrientationVertical]
                         (fmap todoItem (todos s))
    todoItem todo = widget Gtk.Label [#label := name todo]

Now, GHC tells us there’s a “non-type variable argument in the constraint”. The type of todoList requires us to add the FlexibleContexts language extension.

{-# LANGUAGE FlexibleContexts  #-}
{-# LANGUAGE OverloadedLabels  #-}
{-# LANGUAGE OverloadedLists   #-}
{-# LANGUAGE OverloadedStrings #-}
module Main where

The remaining type error is in the definition of main, where the initial state cannot be a () value. We construct a State value with an empty vector.

main :: IO ()
main = void $ run App
  { view         = view'
  , update       = update'
  , inputs       = []
  , initialState = State {todos = mempty}
  }

Adding New To-Do Items

While our application type-checks and runs, there are no to-do items to display, and there’s no way of adding new ones. We need to implement a form, where the user inserts text and hits the Enter key to add a new to-do item. To represent these events, we’ll add two new constructors to our Event type.

data Event
  = TodoTextChanged Text
  | TodoSubmitted
  | Closed

TodoTextChanged will be emitted each time the text in the form changes, carrying the current text value. The TodoSubmitted event will be emitted when the user hits Enter.

When the to-do item is submitted, we need to know the current text to use, so we add a currentText field to the state type.

data State = State
  { todos       :: Vector Todo
  , currentText :: Text
  }

We modify the initialState value to include an empty Text value.

main :: IO ()
main = void $ run App
  { view         = view'
  , update       = update'
  , inputs       = []
  , initialState = State {todos = mempty, currentText = mempty}
  }

Now, let’s add the form. We wrap our todoList in a vertical box, containing the todoList and a newTodoForm widget.

view' :: State -> AppView Gtk.Window Event
view' s = bin
  Gtk.Window
  [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
  (container Gtk.Box
             [#orientation := Gtk.OrientationVertical]
             [todoList, newTodoForm]
  )
  where
    ...

The form consists of a Gtk.Entry widget, with the currentText of our state as its text value. The placeholder text will be shown when the entry isn’t focused. We use onM to attach an effectful event handler to the changed signal.

view' :: State -> AppView Gtk.Window Event
view' s = bin
  Gtk.Window
  [#title := "TodoGTK+", on #deleteEvent (const (True, Closed))]
  (container Gtk.Box
             [#orientation := Gtk.OrientationVertical]
             [todoList, newTodoForm]
  )
  where
    ...
    newTodoForm = widget
      Gtk.Entry
      [ #text := currentText s
      , #placeholderText := "What needs to be done?"
      , onM #changed _
      ]

The typed hole tells us we need a function Gtk.Entry -> IO Event. We use onM, rather than on, so that the handler is an IO action returning the event instead of a pure function: we need to query the underlying GTK+ widget for its current text value. By using entryGetText, and mapping our event constructor over that IO action, we get a function of the correct type.

    ...
    newTodoForm = widget
      Gtk.Entry
      [ #text := currentText s
      , #placeholderText := "What needs to be done?"
      , onM #changed (fmap TodoTextChanged . Gtk.entryGetText)
      ]

It is often necessary to use onM and effectful GTK+ operations in event handlers, as the callback type signatures rarely have enough information in their arguments. But for the next event, TodoSubmitted, we don’t need any more information, and we can use on to declare a pure event handler for the activated signal.

    ...
    newTodoForm = widget
      Gtk.Entry
      [ #text := currentText s
      , #placeholderText := "What needs to be done?"
      , onM #changed (fmap TodoTextChanged . Gtk.entryGetText)
      , on #activate TodoSubmitted
      ]

Moving to the next warning, we see that the update' function is no longer total. We are missing cases for our new events. Let’s give the arguments names and pattern match on the event. The case for Closed will be the same as before.

update' :: State -> Event -> Transition State Event
update' s e = case e of
  Closed -> Exit

When the to-do text value changes, we’ll update the currentText state using a Transition. The first argument is the new state, and the second argument is an action of type IO (Maybe Event). We don’t want to emit any new event, so we use (pure Nothing).

update' :: State -> Event -> Transition State Event
update' s e = case e of
  TodoTextChanged t -> Transition s { currentText = t } (pure Nothing)
  Closed -> Exit

For the TodoSubmitted event, we define a newTodo value with the currentText as its name, and transition to a new state with the newTodo item appended to the todos vector. We also reset the currentText to be empty.
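As a dependency-free sketch of that transition, the state change can be modeled like this. Note the simplifications: String and lists stand in for Text and Vector, a plain state-to-state function stands in for Transition, and submitTodo is a hypothetical name for illustration only.

```haskell
-- Simplified model of the TodoSubmitted state change. In the real
-- update' function this becomes a Transition carrying the new state
-- and (pure Nothing).
data Todo = Todo { name :: String } deriving (Eq, Show)
data State = State { todos :: [Todo], currentText :: String }
  deriving (Eq, Show)

submitTodo :: State -> State
submitTodo s = State
  { todos       = todos s ++ [Todo { name = currentText s }]
  , currentText = ""  -- reset the entry text
  }
```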

To use Vector.snoc, we need to add a qualified import.

import           Control.Monad                 (void)
import           Data.Text                     (Text)
import           Data.Vector                   (Vector)
import qualified Data.Vector                   as Vector
import qualified GI.Gtk                        as Gtk
import           GI.Gtk.Declarative
import           GI.Gtk.Declarative.App.Simple

Running the application, we can start adding to-do items.

Improving the Layout

Our application doesn’t look very good yet, so let’s improve the layout a bit. We’ll begin by left-aligning the to-do items.

todoItem todo =
  widget
    Gtk.Label
    [#label := name todo, #halign := Gtk.AlignStart]

To push the form down to the bottom of the window, we’ll wrap the todoList in a BoxChild, and override the defaultBoxChildProperties to have the child widget expand and fill all the available space of the box.

todoList =
  BoxChild defaultBoxChildProperties { expand = True, fill = True }
    $ container Gtk.Box
                [#orientation := Gtk.OrientationVertical]
                (fmap todoItem (todos s))

We re-run the application, and see it has a nicer layout.

Completing To-Do Items

There’s one very important feature missing: being able to mark a to-do item as completed. We add a Bool field called completed to the Todo data type.

data Todo = Todo
  { name      :: Text
  , completed :: Bool
  }

When creating new items, we set it to False.

update' :: State -> Event -> Transition State Event
update' s e = case e of
  ...
  TodoSubmitted ->
    let newTodo = Todo {name = currentText s, completed = False}
    in  Transition
          s { todos = todos s `Vector.snoc` newTodo, currentText = mempty }
          (pure Nothing)
  ...

Instead of simply rendering the name, we’ll use strike-through markup if the item is completed. We define completedMarkup and, using guards, render either the markup or the plain name. To make the text strike-through, we wrap it in <s> tags.

widget
  Gtk.Label
    [ #label := completedMarkup todo
    , #halign := Gtk.AlignStart
    ]
  where
    completedMarkup todo
      | completed todo = "<s>" <> name todo <> "</s>"
      | otherwise      = name todo

For this to work, we need to enable markup for the label by setting #useMarkup to True.

widget
  Gtk.Label
    [ #label := completedMarkup todo
    , #useMarkup := True
    , #halign := Gtk.AlignStart
    ]
  where
    completedMarkup todo
      | completed todo = "<s>" <> name todo <> "</s>"
      | otherwise      = name todo

In order for the user to be able to toggle the completed status, we wrap the label in a Gtk.CheckButton bin. The #active property will be set to the current completed status of the Todo value. When the check button is toggled, we want to emit a new event called TodoToggled.

todoItem i todo =
  bin Gtk.CheckButton
      [#active := completed todo, on #toggled (TodoToggled i)]
    $ widget
        Gtk.Label
        [ #label := completedMarkup todo
        , #useMarkup := True
        , #halign := Gtk.AlignStart
        ]

Let’s add the new constructor to the Event data type. It will carry the index of the to-do item.

data Event
  = TodoTextChanged Text
  | TodoSubmitted
  | TodoToggled Int
  | Closed

To get the corresponding index of each Todo value, we’ll iterate using Vector.imap instead of using fmap.

    todoList =
      BoxChild defaultBoxChildProperties { expand = True, fill = True }
        $ container Gtk.Box
                    [#orientation := Gtk.OrientationVertical]
                    (Vector.imap todoItem (todos s))
    todoItem i todo =
      ...

The pattern match on events in the update' function is now missing a case for the new event constructor. Again, we’ll do a transition where we update the todos somehow.

update' :: State -> Event -> Transition State Event
update' s e = case e of
  ...
  TodoToggled i -> Transition s { todos = _ (todos s) } (pure Nothing)
  ...

We need a function Vector Todo -> Vector Todo that modifies the value at the index i. There’s no handy function like that available in the vector package, so we’ll create our own. Let’s call it mapAt.

update' :: State -> Event -> Transition State Event
update' s e = case e of
  ...
  TodoToggled i -> Transition s { todos = mapAt i _ (todos s) } (pure Nothing)
  ...

It will take as arguments the index, a mapping function, and a Vector a, and return a Vector a.

mapAt :: Int -> (a -> a) -> Vector a -> Vector a

We implement it using Vector.modify, and actions on the mutable representation of the vector. We overwrite the value at i with the result of mapping f over the existing value at i.

mapAt :: Int -> (a -> a) -> Vector a -> Vector a
mapAt i f = Vector.modify (\v -> MVector.write v i . f =<< MVector.read v i)

To use mutable vector operations through the MVector name, we add the qualified import.

import qualified Data.Vector.Mutable           as MVector
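As a sanity check of the intended semantics, here is a hypothetical list-based analogue of mapAt (the real implementation above uses mutable vectors for efficiency; mapAtList is a name introduced only for this sketch):

```haskell
-- List analogue of mapAt: apply f only at index i.
mapAtList :: Int -> (a -> a) -> [a] -> [a]
mapAtList i f xs = [ if j == i then f x else x | (j, x) <- zip [0 ..] xs ]
```

One behavioral difference worth noting: an out-of-range index is a no-op in this list version, whereas MVector.read/MVector.write on an invalid index would fail.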

Finally, we implement the function to map, called toggleCompleted.

toggleCompleted :: Todo -> Todo
toggleCompleted todo = todo { completed = not (completed todo) }

update' :: State -> Event -> Transition State Event
update' s e = case e of
  ...
  TodoToggled i -> Transition s { todos = mapAt i toggleCompleted (todos s) } (pure Nothing)
  ...

Now, we run our application, add some to-do items, and mark or unmark them as completed. We’re done!

Learning More

Building our to-do list application, we have learned the basics of gi-gtk-declarative and the “App.Simple” architecture. There’s more to learn, though, and I recommend checking out the project documentation. There are also a bunch of examples in the Git repository.

Please note that this project is very young, and that APIs are not necessarily stable yet. I think, however, that it’s a much nicer way to build GTK+ applications using Haskell than the underlying APIs provided by the auto-generated bindings.

Now, have fun building your own functional GTK+ applications!

by Oskar Wickström at January 19, 2019 12:00 AM

April 16, 2019

Oskar Wickström

Property-Based Testing in a Screencast Editor, Case Study 2: Video Scene Classification

In the last case study on property-based testing (PBT) in Komposition we looked at timeline flattening. This post covers the video classifier, how it was tested before, and the bugs I found when I wrote property tests for it.

If you haven’t read the introduction or the first case study yet, I recommend checking them out!

Classifying Scenes in Imported Video

Komposition can automatically classify scenes when importing video files. This is a central productivity feature in the application, effectively cutting recorded screencast material automatically, letting the user focus on arranging the scenes of their screencast. Scenes are segments that are considered moving, as opposed to still segments:

  • A still segment is a sequence of at least \(S\) seconds of near-equal frames
  • A moving segment is a sequence of non-equal frames, or a sequence of near-equal frames with a duration less than \(S\)

\(S\) is a preconfigured minimum still segment duration in Komposition. In the future it might be configurable from the user interface, but for now it’s hard-coded.

Equality of two frames \(f_1\) and \(f_2\) is defined as a function \(E(f_1, f_2)\), described informally as:

  • comparing corresponding pixel color values of \(f_1\) and \(f_2\), with a small epsilon for tolerance of color variation, and
  • deciding two frames equal when at least 99% of corresponding pixel pairs are considered equal.
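The informal definition above can be sketched directly in Haskell. This is an illustrative model only, not Komposition's actual implementation (which works on packed pixel buffers for performance); the pixel representation and epsilon handling here are assumptions:

```haskell
type Pixel = (Int, Int, Int)  -- (R, G, B), each channel in [0, 255]
type Frame = [Pixel]          -- flattened pixel list, for simplicity

-- Two pixels are considered equal if each channel differs by at most eps.
pixelEqual :: Int -> Pixel -> Pixel -> Bool
pixelEqual eps (r1, g1, b1) (r2, g2, b2) =
  abs (r1 - r2) <= eps && abs (g1 - g2) <= eps && abs (b1 - b2) <= eps

-- E(f1, f2): at least 99% of corresponding pixel pairs are equal.
framesEqual :: Int -> Frame -> Frame -> Bool
framesEqual eps f1 f2 =
  let pairs  = zip f1 f2
      equalN = length (filter (uncurry (pixelEqual eps)) pairs)
  in  fromIntegral equalN >= (0.99 :: Double) * fromIntegral (length pairs)
```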

In addition to the rules stated above, there are two edge cases:

  1. The first segment is always considered a moving segment (even if it’s just a single frame)
  2. The last segment may be a still segment with a duration less than \(S\)

The second edge case is not what I would call a desirable feature, but rather a shortcoming due to the classifier not doing any type of backtracking. This could be changed in the future.

Manually Testing the Classifier

The first version of the video classifier had no property tests. Instead, I wrote what I thought was a decent classifier algorithm, mostly messing around with various pixel buffer representations and parallel processing to achieve acceptable performance.

The only type of testing I had available, except for general use of the application, was a color-tinting utility. This was a separate program using the same classifier algorithm. It took as input a video file, and produced as output a video file where each frame was tinted green or red, for moving and still frames, respectively.

Video classification shown with color tinting

In the recording above you see the color-tinted output video based on a recent version of the classifier. It classifies moving and still segments rather accurately. Before I wrote property tests and fixed the bugs that I found, it did not look so pretty, flipping back and forth at seemingly random places.

At first, debugging the classifier with the color-tinting tool seemed like a creative and powerful technique. But the feedback loop was horrible: I had to record video, process it using the slow color-tinting program, and inspect the result by eye. In hindsight, I can conclude that PBT is far more effective for testing the classifier.

Video Classification Properties

Figuring out how to write property tests for video classification wasn’t obvious to me. It’s not uncommon in example-based testing that tests end up mirroring the structure, and even the full implementation complexity, of the system under test. The same can happen in property-based testing.

With some complex systems it’s very hard to describe the correctness as a relation between any valid input and the system’s observed output. The video classifier is one such case. How do I decide if an output classification is correct for a specific input, without reimplementing the classification itself in my tests?

The other way around is easy, though! If I have a classification, I can convert that into video frames. Thus, the solution to the testing problem is to not generate the input, but instead generate the expected output. Hillel Wayne calls this technique “oracle generators” in his recent article.1

The classifier property tests generate high-level representations of the expected classification output, which are lists of values describing the type and duration of segments.

A generated sequence of expected classified segments

Next, the list of output segments is converted into a sequence of actual frames. Frames are two-dimensional arrays of RGB pixel values. The conversion is simple:

  • Moving segments are converted to a sequence of alternating frames, flipping between all gray and all white pixels
  • Still frames are converted to a sequence of frames containing all black pixels
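The conversion can be sketched as follows, reducing each frame to a single color for illustration (real frames are full two-dimensional pixel arrays, and these type names are assumptions, not Komposition's):

```haskell
data Color = Gray | White | Black deriving (Eq, Show)

-- Expected-output segments, measured in frame counts.
data Segment = Moving Int | Still Int deriving (Eq, Show)

toFrames :: Segment -> [Color]
toFrames (Moving n) = take n (cycle [Gray, White])  -- alternating frames
toFrames (Still n)  = replicate n Black             -- identical frames

segmentsToFrames :: [Segment] -> [Color]
segmentsToFrames = concatMap toFrames
```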

The example sequence in the diagram above, when converted to pixel frames with a frame rate of 10 FPS, can be visualized like in the following diagram, where each thin rectangle represents a frame:

Pixel frames derived from a sequence of expected classified output segments

By generating high-level output and converting it to pixel frames, I have input to feed the classifier with, and I know what output it should produce. Writing effective property tests then comes down to writing generators that produce valid output, according to the specification of the classifier. In this post I’ll show two such property tests.

Testing Still Segment Minimum Length

As stated in the beginning of this post, classified still segments must have a duration greater than or equal to \(S\), where \(S\) is the minimum still segment duration used as a parameter for the classifier. The first property test we’ll look at asserts that this invariant holds for all classification output.

hprop_classifies_still_segments_of_min_length = property $ do

  -- 1. Generate a minimum still segment length/duration
  minStillSegmentFrames <- forAll $ Gen.int (Range.linear 2 (2 * frameRate))
  let minStillSegmentTime = frameCountDuration minStillSegmentFrames

  -- 2. Generate output segments
  segments <- forAll $
    genSegments (Range.linear 1 10)
                (Range.linear 1
                              (minStillSegmentFrames * 2))
                (Range.linear minStillSegmentFrames
                              (minStillSegmentFrames * 2))
                resolution

  -- 3. Convert test segments to actual pixel frames
  let pixelFrames = testSegmentsToPixelFrames segments

  -- 4. Run the classifier on the pixel frames
  let counted = classifyMovement minStillSegmentTime (Pipes.each pixelFrames)
                & Pipes.toList
                & countSegments

  -- 5. Sanity check
  countTestSegmentFrames segments === totalClassifiedFrames counted

  -- 6. Ignore last segment and verify all other segments
  case initMay counted of
    Just rest ->
      traverse_ (assertStillLengthAtLeast minStillSegmentTime) rest
    Nothing -> success
  where
    resolution = 10 :. 10

This chunk of test code is pretty busy, and it’s using a few helper functions that I’m not going to bore you with. At a high level, this test:

  1. Generates a minimum still segment duration, based on a minimum frame count (let’s call it \(n\)) in the range \([2, 20]\). The classifier currently requires that \(n \geq 2\), hence the lower bound. The upper bound of 20 frames is an arbitrary number that I’ve chosen.
  2. Generates valid output segments using the custom generator genSegments, where
    • moving segments have a frame count in \([1, 2n]\), and
    • still segments have a frame count in \([n, 2n]\).
  3. Converts the generated output segments to actual pixel frames. This is done using a helper function that returns a list of alternating gray and white frames, or all black frames, as described earlier.
  4. Counts the number of consecutive frames within each segment, producing a list like [Moving 18, Still 5, Moving 12, Still 30].
  5. Performs a sanity check that the number of frames in the generated expected output is equal to the number of frames in the classified output. The classifier must not lose or duplicate frames.
  6. Drops the last classified segment, which according to the specification can have a frame count less than \(n\), and asserts that all other still segments have a frame count greater than or equal to \(n\).

Let’s run some tests.

> :{
| hprop_classifies_still_segments_of_min_length
|   & Hedgehog.withTests 10000
|   & Hedgehog.check
| :}
  ✓ <interactive> passed 10000 tests.

Cool, it looks like it’s working.

Sidetrack: Why generate the output?

Now, you might wonder why I generate output segments first, and then convert to pixel frames. Why not generate random pixel frames to begin with? The property test above only checks that the still segments are long enough!

The benefit of generating valid output becomes clearer in the next property test, where I use it as the expected output of the classifier. Converting the output to a sequence of pixel frames is easy, and I don’t have to state any complex relation between the input and output in my property. When using oracle generators, the assertions can often be plain equality checks on generated and actual output.

But there’s benefit in using the same oracle generator for the “minimum still segment length” property, even if it’s more subtle. By generating valid output and converting to pixel frames, I can generate inputs that cover the edge cases of the system under test. Using property test statistics and coverage checks, I could inspect coverage, and even fail test runs where the generators don’t hit enough of the cases I’m interested in.2

Had I generated random sequences of pixel frames, then perhaps the majority of the generated examples would only produce moving segments. I could tweak the generator to get closer to either moving or still frames, within some distribution, but wouldn’t that just be a variation of generating valid scenes? It would be worse, in fact. I wouldn’t then be reusing existing generators, and I wouldn’t have a high-level representation that I could easily convert from and compare with in assertions.

Testing Moving Segment Time Spans

The second property states that the classified moving segments must start and end at the same timestamps as the moving segments in the generated output. Compared to the previous property, the relation between generated output and actual classified output is stronger.

hprop_classifies_same_scenes_as_input = property $ do
  -- 1. Generate a minimum still segment duration
  minStillSegmentFrames <- forAll $ Gen.int (Range.linear 2 (2 * frameRate))
  let minStillSegmentTime = frameCountDuration minStillSegmentFrames

  -- 2. Generate test segments
  segments <- forAll $ genSegments (Range.linear 1 10)
                                   (Range.linear 1
                                                 (minStillSegmentFrames * 2))
                                   (Range.linear minStillSegmentFrames
                                                 (minStillSegmentFrames * 2))
                                   resolution

  -- 3. Convert test segments to actual pixel frames
  let pixelFrames = testSegmentsToPixelFrames segments

  -- 4. Convert expected output segments to a list of expected time spans
  --    and the full duration
  let durations = map segmentWithDuration segments
      expectedSegments = movingSceneTimeSpans durations
      fullDuration = foldMap unwrapSegment durations

  -- 5. Classify movement of frames
  let classifiedFrames =
        Pipes.each pixelFrames
        & classifyMovement minStillSegmentTime
        & Pipes.toList

  -- 6. Classify moving scene time spans
  let classified =
        (Pipes.each classifiedFrames
         & classifyMovingScenes fullDuration)
        >-> Pipes.drain
        & Pipes.runEffect
        & runIdentity

  -- 7. Check classified time span equivalence
  expectedSegments === classified

  where
    resolution = 10 :. 10

Steps 1–3 are the same as in the previous property test. From there, this test:

  1. Converts the generated output segments into a list of time spans. Each time span marks the start and end of an expected moving segment. The full duration of the input, needed by the scene classifier in step 6, is also computed here.
  2. Classifies the movement of each frame, i.e. whether it’s part of a moving or still segment.
  3. Runs the second classifier function, classifyMovingScenes, on the full duration and the frames with classified movement data, resulting in a list of time spans.
  4. Compares the expected and actual classified lists of time spans.

While this test looks somewhat complicated with its setup and various conversions, the core idea is simple. But is it effective?

Bugs! Bugs everywhere!

Preparing for a talk on property-based testing, I added the “moving segment time spans” property a week or so before the event. At this time, I had used Komposition to edit multiple screencasts. Surely, all significant bugs were caught already. Adding property tests should only confirm the level of quality the application already had. Right?

Nope. First, I discovered that my existing tests were fundamentally incorrect to begin with. They were not reflecting the specification I had in mind, the one I described in the beginning of this post.

Furthermore, I found that the generators had errors. At first, I used Hedgehog to generate the pixels used for the classifier input. Moving frames were based on a majority of randomly colored pixels and a small percentage of equally colored pixels. Still frames were based on a random single color.

The problem I had not anticipated was that the colors used in moving frames were not guaranteed to be distinct from the color used in still frames. In small-sized examples I got black frames at the beginning and end of moving segments, and black frames for still segments, resulting in different classified output than expected. Hedgehog shrinking the failing examples’ colors towards 0, which is black, highlighted this problem even more.

I made my generators much simpler, using the alternating white/gray frames approach described earlier, and went on to running my new shiny tests. Here’s what I got:

What? Where does 0s–0.6s come from? The classified time span should’ve been 0s–1s, as the generated output has a single moving scene of 10 frames (1 second at 10 FPS). I started digging, using the annotate function in Hedgehog to inspect the generated and intermediate values in failing examples.

I couldn’t find anything incorrect in the generated data, so I shifted focus to the implementation code. The end timestamp 0.6s was consistently showing up in failing examples. Looking at the code, I found a curious hard-coded value 0.5 being bound and used locally in classifyMovement.

The function is essentially a fold over a stream of frames, where the accumulator holds vectors of previously seen and not-yet-classified frames. Stripping down and simplifying the old code to highlight one of the bugs, it looked something like this:

classifyMovement minStillSegmentTime =
  case ... of
    InStillState{..} ->
      if someDiff > minEqualTimeForStill
        then ...
        else ...
    InMovingState{..} ->
      if someOtherDiff >= minStillSegmentTime
        then ...
        else ...
  where
    minEqualTimeForStill = 0.5

Let’s look at what’s going on here. In the InStillState branch it uses the value minEqualTimeForStill, instead of always using the minStillSegmentTime argument. This is likely a residue from some refactoring where I meant to make the value a parameter instead of having it hard-coded in the definition.

Sparing you the gory implementation details, I’ll outline two more problems that I found. In addition to using the hard-coded value, it incorrectly classified frames based on that value. Frames that should’ve been classified as “moving” ended up “still”. That’s why I didn’t get 0s–1s in the output.

Why didn’t I see 0s–0.5s, given the hard-coded value 0.5? Well, there was also an off-by-one bug, in which one frame was classified incorrectly together with the accumulated moving frames.

The classifyMovement function is 30 lines of Haskell code juggling some state, and I managed to mess it up in three separate ways at the same time. With these tests in place I quickly found the bugs and fixed them. I ran thousands of tests, all passing.

Finally, I ran the application, imported a previously recorded video, and edited a short screencast. The classified moving segments were notably better than before.

Summary

A simple streaming fold can hide bugs that are hard to detect with manual testing. The consistent result of 0.6, together with the hard-coded value 0.5 and a frame rate of 10 FPS, pointed clearly towards an off-by-one bug. I consider this a great showcase of how powerful shrinking in PBT is, consistently presenting minimal examples that point towards specific problems. It’s not just a party trick on ideal mathematical functions.

Could these errors have been caught without PBT? I think so, but what effort would it require? Manual testing and introspection did not work for me. Code review might have revealed the incorrect definition of minEqualTimeForStill, but perhaps not the off-by-one and incorrect state handling bugs. There are of course many other QA techniques; I won’t evaluate them all here. But given the low effort that PBT requires in this setting, the number of problems it finds, and the accuracy it provides when troubleshooting, I think it’s a clear win.

I also want to highlight the iterative process that I find naturally emerges when applying PBT:

  1. Think about how your system is supposed to work. Write down your specification.
  2. Think about how to generate input data and how to test your system, based on your specification. Tune your generators to provide better test data. Try out alternative styles of properties. Perhaps model-based or metamorphic testing fits your system better.
  3. Run tests and analyze the minimal failing examples. Fix your implementation until all tests pass.

This can be done when modifying existing code, or when writing new code. You can apply this without having any implementation code yet, perhaps just a minimal stub, and the workflow is essentially the same as TDD.

Coming Up

The final post in this series will cover testing at a higher level of the system, with effects and multiple subsystems being integrated to form a full application. We will look at property tests that found many bugs and that made a substantial refactoring possible.

  1. Introduction
  2. Timeline Flattening
  3. Video Scene Classification
  4. Integration Testing

Until then, thanks for reading!

Credits

Thank you Ulrik Sandberg, Pontus Nagy, and Fredrik Björeman for reviewing drafts of this post.

Footnotes


  1. See the “Oracle Generators” section in Finding Property Tests.↩︎

  2. John Hughes’ talk Building on developers’ intuitions goes into depth on this. There’s also work being done to provide similar functionality for Hedgehog.↩︎

April 16, 2019 10:00 PM

June 22, 2013

Shayne Fletcher

Maybe

There are different approaches to the issue of not having a value to return. One idiom to deal with this in C++ is the use of boost::optional<T> or std::pair<bool, T>.

class boost::optional<T> //Discriminated-union wrapper for values.

Maybe is a polymorphic sum type with two constructors: Nothing and Just a.
Here's how Maybe is defined in Haskell.


{- The Maybe type encapsulates an optional value. A value of type
Maybe a either contains a value of type a (represented as Just a), or
it is empty (represented as Nothing). Using Maybe is a good way to
deal with errors or exceptional cases without resorting to drastic
measures such as error.

The Maybe type is also a monad.
It is a simple kind of error monad, where all errors are
represented by Nothing. -}

data Maybe a = Nothing | Just a

{- The maybe function takes a default value, a function, and a Maybe
value. If the Maybe value is Nothing, the function returns the default
value. Otherwise, it applies the function to the value inside the Just
and returns the result. -}

maybe :: b -> (a -> b) -> Maybe a -> b
maybe n _ Nothing = n
maybe _ f (Just x) = f x
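maybe is handy wherever a fallback value is needed; for instance (an illustrative sketch of my own, not from the original post), it pairs nicely with lookup:

```haskell
-- Supplying a default for an association-list lookup that may miss.
phoneBook :: [(String, String)]
phoneBook = [("alice", "555-1234")]

lookupOr :: String -> String -> String
lookupOr def k = maybe def id (lookup k phoneBook)
-- lookupOr "unknown" "alice" evaluates to "555-1234"
-- lookupOr "unknown" "bob"   evaluates to "unknown"
```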

I haven't tried to compile the following OCaml yet, but I think it should be roughly OK.

type 'a option = None | Some of 'a ;;

let maybe n f a =
match a with
| None -> n
| Some x -> f x
;;

Here's another variant on the Maybe monad, this time in Felix. It is applied to the problem of "safe arithmetic", i.e. the usual integer arithmetic but with guards against under/overflow and division by zero.


union success[T] =
| Success of T
| Failure of string
;

fun str[T] (x:success[T]) =>
match x with
| Success ?t => "Success " + str(t)
| Failure ?s => "Failure " + s
endmatch
;

typedef fun Fallible (t:TYPE) : TYPE => success[t] ;

instance Monad[Fallible]
{
fun bind[a, b] (x:Fallible a, f: a -> Fallible b) =>
match x with
| Success ?a => f a
| Failure[a] ?s => Failure[b] s
endmatch
;

fun ret[a](x:a):Fallible a => Success x ;
}

//Safe arithmetic.

const INT_MAX:int requires Cxx_headers::cstdlib ;
const INT_MIN:int requires Cxx_headers::cstdlib ;

fun madd (x:int) (y:int) : success[int] =>
if x > 0 and y > (INT_MAX - x) then
Failure[int] "overflow"
else
Success (y + x)
endif
;

fun msub (x:int) (y:int) : success[int] =>
if x > 0 and y < (INT_MIN + x) then
Failure[int] "underflow"
else
Success (y - x)
endif
;

fun mmul (x:int) (y:int) : success[int] =>
if x != 0 and y > (INT_MAX / x) then
Failure[int] "overflow"
else
Success (y * x)
endif
;

fun mdiv (x:int) (y:int) : success[int] =>
if (x == 0) then
Failure[int] "attempted division by zero"
else
Success (y / x)
endif
;

//--
//
//Test.

open Monad[Fallible] ;

//Evaluate some simple expressions.

val zero = ret 0 ;
val zero_over_one = bind ((Success 0), (mdiv 1)) ;
val undefined = bind ((Success 1),(mdiv 0)) ;
val two = bind((ret 1), (madd 1)) ;
val two_by_one_plus_one = bind (two , (mmul 2)) ;

println$ "zero = " + str zero ;
println$ "1 / 0 = " + str undefined ;
println$ "0 / 1 = " + str zero_over_one ;
println$ "1 + 1 = " + str two ;
println$ "2 * (1 + 1) = " + str (bind (bind((ret 1), (madd 1)) , (mmul 2))) ;
println$ "INT_MAX - 1 = " + str (bind ((ret INT_MAX), (msub 1))) ;
println$ "INT_MAX + 1 = " + str (bind ((ret INT_MAX), (madd 1))) ;
println$ "INT_MIN - 1 = " + str (bind ((ret INT_MIN), (msub 1))) ;
println$ "INT_MIN + 1 = " + str (bind ((ret INT_MIN), (madd 1))) ;

println$ "--" ;

//We do it again, this time using the "traditional" rshift-assign
//syntax.

syntax monad //Override the right shift assignment operator.
{
x[ssetunion_pri] := x[ssetunion_pri] ">>=" x[>ssetunion_pri] =># "`(ast_apply ,_sr (bind (,_1 ,_3)))";
}
open syntax monad;

println$ "zero = " + str (ret 0) ;
println$ "1 / 0 = " + str (ret 1 >>= mdiv 0) ;
println$ "0 / 1 = " + str (ret 0 >>= mdiv 1) ;
println$ "1 + 1 = " + str (ret 1 >>= madd 1) ;
println$ "2 * (1 + 1) = " + str (ret 1 >>= madd 1 >>= mmul 2) ;
println$ "INT_MAX = " + str (INT_MAX) ;
println$ "INT_MAX - 1 = " + str (ret INT_MAX >>= msub 1) ;
println$ "INT_MAX + 1 = " + str (ret INT_MAX >>= madd 1) ;
println$ "INT_MIN = " + str (INT_MIN) ;
println$ "INT_MIN - 1 = " + str (ret INT_MIN >>= msub 1) ;
println$ "INT_MIN + 1 = " + str (ret INT_MIN >>= madd 1) ;
println$ "2 * (INT_MAX/2) = " + str (ret INT_MAX >>= mdiv 2 >>= mmul 2 >>= madd 1) ; //The last one since we know INT_MAX is odd and that division will truncate.
println$ "2 * (INT_MAX/2 + 1) = " + str (ret INT_MAX >>= mdiv 2 >>= madd 1 >>= mmul 2) ;

//--
That last block, using the >>= syntax, produces (in part) the following output (the last two print statements have been truncated away -- the very last one produces an expected overflow).

by Shayne Fletcher (noreply@blogger.com) at June 22, 2013 09:07 PM

January 29, 2020

Joey Hess

announcing arduino-copilot

arduino-copilot, released today, makes it easy to use Haskell to program an Arduino. It's a FRP style system, and uses the Copilot DSL to generate embedded C code.

gotta blink before you can run

To make your arduino blink its LED, you only need 4 lines of Haskell:

import Copilot.Arduino
main = arduino $ do
    led =: blinking
    delay =: constant (MilliSeconds 100)

Running that Haskell program generates an Arduino sketch in an .ino file, which can be loaded into the Arduino IDE and uploaded to the Arduino the same as any other sketch. It's also easy to use things like Arduino-Makefile to build and upload sketches generated by arduino-copilot.

shoulders of giants

Copilot is quite an impressive embedding of C in Haskell. It was developed for NASA by Galois and is intended for safety-critical applications. So it's neat to be able to repurpose it into hobbyist microcontrollers. (I do hope to get more type safety added to Copilot though, currently it seems rather easy to confuse eg miles with kilometers when using it.)

I'm not the first person to use Copilot to program an Arduino. Anthony Cowley showed how to do it in Abstractions for the Functional Roboticist back in 2013. But he had to write a skeleton of C code around the C generated by Copilot. Among other features, arduino-copilot automates generating that C skeleton. So you don't need to remember to enable GPIO pin 13 for output in the setup function; arduino-copilot sees you're using the LED and does that for you.

frp-arduino was a big inspiration too, especially how easy it makes it to generate an Arduino sketch without writing any C. The "=:" operator in arduino-copilot is copied from it. But frp-arduino contains its own DSL, which seems less capable than Copilot. And when I looked at using frp-arduino for some real world sensing and control, it didn't seem to be possible to integrate it with existing Arduino libraries written in C. While I've not done that with arduino-copilot yet, I did design it so it should be reasonably easy to integrate it with any Arduino library.

a more interesting example

Let's do something more interesting than flashing a LED. We'll assume pin 12 of an Arduino Uno is connected to a push button. When the button is pressed, the LED should stay lit. Otherwise, flash the LED, starting out flashing it fast, but flashing slower and slower over time, and then back to fast flashing.

{-# LANGUAGE RebindableSyntax #-}
import Copilot.Arduino.Uno

main :: IO ()
main = arduino $ do
        buttonpressed <- readfrom pin12
        led =: buttonpressed || blinking
        delay =: MilliSeconds (longer_and_longer * 2)

This is starting to use features of the Copilot DSL; "buttonpressed || blinking" combines two FRP streams together, and "longer_and_longer * 2" does math on a stream. What a concise and readable implementation of this Arduino's behavior!

Finishing up the demo program is the implementation of longer_and_longer. This part is entirely in the Copilot DSL, and actually I lifted it from some Copilot example code. It gives a reasonable flavor of what it's like to construct streams in Copilot.

longer_and_longer :: Stream Int16
longer_and_longer = counter true $ counter true false `mod` 64 == 0

counter :: Stream Bool -> Stream Bool -> Stream Int16
counter inc reset = cnt
   where
        cnt = if reset then 0 else if inc then z + 1 else z
        z = [0] ++ cnt

This whole example turns into just 63 lines of C code, which compiles to a 1248 byte binary, so there's plenty of room left for larger, more complex programs.

simulating an Arduino

One of Copilot's features is it can interpret code, without needing to run it on the target platform. So the Arduino's behavior can be simulated, without ever generating C code, right at the console!

But first, one line of code needs to be changed, to provide some button states for the simulation:

        buttonpressed <- readfrom' pin12 [False, False, False, True, True]

Now let's see what it does:

# runghc demo.hs -i 5
delay:         digitalWrite_13: 
(2)            (13,false)    
(4)            (13,true)     
(8)            (13,false)    
(16)           (13,true)     
(32)           (13,true)     

Which is exactly what I described it doing! To prove that it always behaves correctly, you could use copilot-theorem.

peek at C

Let's look at the C code that is generated by the first example, of blinking the LED.

This is not the generated code, but a representation of how the C compiler sees it, after constant folding, and some very basic optimisation. This compiles to the same binary as the generated code.

void setup() {
      pinMode(13, OUTPUT);
}
void loop(void) {
      delay(100);
      digitalWrite(13, s0[s0_idx]);
      s0_idx = (++s0_idx) % 2;
}

If you compare this with hand-written C code to do the same thing, this is pretty much optimal!

Looking at the C code generated for the more complex example above, you'll see a few unnecessary double computations. That's all I've found to complain about with the generated code. And no matter what you do, Copilot will always generate code that runs in constant space, and constant time.


Development of arduino-copilot was sponsored by Trenton Cronholm and Jake Vosloo on Patreon.

January 29, 2020 01:20 AM

Donnacha Oisín Kidney

Terminating Tricky Traversals

Posted on January 29, 2020
Tags: Agda, Haskell

Imports

{-# OPTIONS --cubical --sized-types #-}

module Post where

open import ../code/terminating-tricky-traversals/Post.Prelude

Just a short one today. I’m going to look at a couple of algorithms for breadth-first traversals with complex termination proofs.

Breadth-First Graph Traversal

In a previous post I talked about breadth-first traversals over graphs, and the difficulties that cycles cause. Graphs are especially tricky to work with in a purely functional language, because so many of the basic algorithms are described in explicitly mutating terms (i.e. "mark off a node as you see it"), with no obvious immutable translation. The following is the last algorithm I came up with:

As difficult as it is to work with graphs in a pure functional language, it’s even more difficult to work in a total language, like Agda. Looking at the above function, there are several bits that we can see right off the bat won’t translate over easily. Let’s start with fix.

We shouldn’t expect to be able to write fix in Agda as-is. Just look at its Haskell implementation:

It’s obviously non total!

(this is actually a non-memoizing version of fix, which is different from the usual one)

We can write a function like fix, though, using coinduction and sized types.

record Thunk (A : Size → Type a) (i : Size) : Type a where
  coinductive
  field force : ∀ {j : Size< i} → A j
open Thunk public

fix : (A : Size → Type a) → (∀ {i} → Thunk A i → A i) → ∀ {j} → A j
fix A f = f λ where .force → fix A f

Coinductive types are the dual to inductive types. Totality-wise, a coinductive type must be “productive”; i.e. a coinductive list can be infinitely long, but it must be provably able to evaluate to a constructor (cons or nil) in finite time.

Sized types also help us out here: they’re quite subtle, and a little finicky to use occasionally, but they are invaluable when it comes to proving termination or productivity of complex (especially higher-order) functions. The canonical example is mapping over the following tree type:

module NonTerminating where
  data Tree (A : Type a) : Type a where
    _&_ : A → List (Tree A) → Tree A

  {-# TERMINATING #-}
  mapTree : (A → B) → Tree A → Tree B
  mapTree f (x & xs) = f x & map (mapTree f) xs

The compiler can’t tell that the recursive call in the mapTree function will only be called on subnodes of the argument: it can’t tell that it’s structurally recursive, in other words. Annoyingly, we can fix the problem by inlining map.

  mutual
    mapTree′ : (A → B) → Tree A → Tree B
    mapTree′ f (x & xs) = f x & mapForest f xs

    mapForest : (A → B) → List (Tree A) → List (Tree B)
    mapForest f [] = []
    mapForest f (x ∷ xs) = mapTree′ f x ∷ mapForest f xs

The other solution is to give the tree a size parameter. This way, all subnodes of a given tree will have smaller sizes, which will give the compiler a finite descending chain condition it can use to prove termination.

data Tree (A : Type a) (i : Size) : Type a where
  _&_ : A → ∀ {j : Size< i} → List (Tree A j) → Tree A i

mapTree : (A → B) → Tree A i → Tree B i
mapTree f (x & xs) = f x & map (mapTree f) xs

So how do we use this stuff in our graph traversal? Well first we’ll need a coinductive Stream type:

record Stream (A : Type a) (i : Size) : Type a where
  coinductive
  field
    head : A
    tail : ∀ {j : Size< i} → Stream A j
open Stream public

smap : (A → B) → Stream A i → Stream B i
smap f xs .head = f (xs .head)
smap f xs .tail = smap f (xs .tail)

And then we can use it to write our breadth-first traversal.

bfs : ⦃ _ : IsDiscrete A ⦄ → (A → List A) → A → Stream (List A) i
bfs g r = smap fst (fix (Stream _) (f r ∘ push))
  where
  push : Thunk (Stream _) i → Stream _ i
  push xs .head = ([] , [])
  push xs .tail = smap (_,_ [] ∘ snd) (xs .force)

  f : _ → Stream _ i → Stream _ i
  f x qs with (x ∈? qs .head .snd) .does
  ... | true = qs
  ... | false = λ where .head → (x ∷ qs .head .fst , x ∷ qs .head .snd)
                        .tail → foldr f (qs .tail) (g x)

How do we convert this to a list of lists? Well, for this condition we would actually need to prove that there are only finitely many elements in the graph. We could actually use Noetherian finiteness for this: though I have a working implementation, I’m still figuring out how to clean this up, so I will leave it for another post.

Traversing a Braun Tree

A recent paper (Nipkow and Sewell 2020) provided Coq proofs for some algorithms on Braun trees (Okasaki 1997), which prompted me to take a look at them again. This time, I came up with an interesting linear-time toList function, which relies on the following peculiar type:

Even after coming up with the type myself, I still can't really make heads or tails of it. If I squint, it starts to look like some bizarre church-encoded binary number (but I have to really squint). It certainly seems related to corecursive queues (Smith 2009).

Anyway, we can use the type to write the following lovely toList function on a Braun tree.
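The Haskell snippets have been lost from this copy of the post; transcribing from the Agda version further down (so the concrete names `Braun`, `Leaf`, and `Node` here are my guesses, not necessarily the author's), the type and the toList function would look roughly like this:

```haskell
-- A Braun tree, and the peculiar Q2 type: a value eventually pops out
-- after being threaded through two queues of tree-processing functions.
data Braun a = Leaf | Node a (Braun a) (Braun a)

newtype Q2 a = Q2 { q2 :: (Q2 a -> Q2 a) -> (Q2 a -> Q2 a) -> a }

-- Level-order conversion of a Braun tree to a list, transcribed
-- directly from the Agda definition below.
toList :: Braun a -> [a]
toList t = q2 (f t n) id id
  where
    n = Q2 (\ls rs -> q2 (ls (rs n)) id id)

    f :: Braun a -> Q2 [a] -> Q2 [a]
    f Leaf         _  = Q2 (\_ _ -> [])
    f (Node x l r) xs = Q2 (\ls rs -> x : q2 xs (ls . f l) (rs . f r))
```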

So can we convert it to Agda?

Not really! As it turns out, this function is even more difficult to implement than one might expect. We can’t even write the Q2 type in Agda without getting in trouble.

{-# NO_POSITIVITY_CHECK #-}
record Q2 (A : Type a) : Type a where
  inductive
  field
    q2 : (Q2 A → Q2 A) →
         (Q2 A → Q2 A) →
         A
open Q2

Q2 isn’t strictly positive, unfortunately.

{-# TERMINATING #-}
toList : Braun A → List A
toList t = f t n .q2 id id
  where
  n : Q2 A
  n .q2 ls rs = ls (rs n) .q2 id id

  f : Braun A → Q2 (List A) → Q2 (List A)
  f leaf         xs .q2 ls rs = []
  f (node x l r) xs .q2 ls rs = x ∷ xs .q2 (ls ∘ f l) (rs ∘ f r)

Apparently this problem of strict positivity for breadth-first traversals has come up before: Berger, Matthes, and Setzer (2019); Hofmann (1993).

References

Berger, Ulrich, Ralph Matthes, and Anton Setzer. 2019. “Martin Hofmann’s Case for Non-Strictly Positive Data Types.” In 24th international conference on types for proofs and programs (TYPES 2018), ed. by Peter Dybjer, José Espírito Santo, and Luís Pinto, 130:22. Leibniz international proceedings in informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. doi:10.4230/LIPIcs.TYPES.2018.1. http://drops.dagstuhl.de/opus/volltexte/2019/11405.

Hofmann, Martin. 1993. “Non Strictly Positive Datatypes in System F.” https://www.seas.upenn.edu/~sweirich/types/archive/1993/msg00027.html.

Nipkow, Tobias, and Thomas Sewell. 2020. “Proof pearl: Braun trees.” In Certified programs and proofs, CPP 2020, ed. by J. Blanchette and C. Hritcu. ACM. http://www21.in.tum.de/~nipkow/pubs/cpp20.html.

Okasaki, Chris. 1997. “Three Algorithms on Braun Trees.” Journal of Functional Programming 7 (6) (November): 661–666. doi:10.1017/S0956796897002876. https://www.eecs.northwestern.edu/~robby/courses/395-495-2013-fall/three-algorithms-on-braun-trees.pdf.

Smith, Leon P. 2009. “Lloyd Allison’s Corecursive Queues: Why Continuations Matter.” The Monad.Reader 14 (14) (July): 28. https://meldingmonads.files.wordpress.com/2009/06/corecqueues.pdf.

by Donnacha Oisín Kidney at January 29, 2020 12:00 AM

January 28, 2020

Manuel M T Chakravarty

Extending Bitcoin-style Ledgers

In the Plutus & Marlowe team at IOHK, we developed an extension to Bitcoin-style UTxO ledgers that we are calling the Extended UTxO Model and that significantly extends the contract scripting capabilities of such ledgers. On top of that new, more powerful ledger model, we developed a domain-specific language for financial contracts, called Marlowe. We have got two papers at the 4th Workshop on Trusted Smart Contracts where we describe both the Extended UTxO Model and Marlowe. Check out the preprints: The Extended UTxO Model and Marlowe: implementing and analysing financial contracts on blockchain.

January 28, 2020 09:03 PM

Functional Jobs

Senior Software Engineer at Habito (Full-time)

Help us set people free from the hell of mortgages

You will become a domain expert by immersing yourself in the wider business to understand the mortgage market and the technology that underpins how we set people free from the hell of mortgages. In collaboration with the crew’s stakeholders, including product management, you’ll play a leading role in shaping the product and the work you and your teammates will deliver.

Our engineers work in cross-functional teams, typically comprising 7 to 8 people including product owners, designers and contributors, which are empowered to meet their goals. We seek to give all our crews the resources and support they need with a minimum of constraints.

You have a delivery-focused mindset and make use of agile software development practices to ensure your crew's success. We value collaboration, repeatability and continuous improvement.

We look for people who love to learn new things whilst using their existing skills and experience to enrich the engineering team.

As a senior engineer, your responsibilities will encompass (but not be limited to):

  • Driving the continued development of our codebase using a broad range of technologies across our entire stack by writing well-formed and properly documented code.
  • Playing a leading role in the evolution of our system architecture, driving best practices and development processes throughout your crew and the wider engineering community.
  • Defining and delivering improvements to our culture of automated testing and specification.
  • Introducing and role modelling new tools and techniques that increase the speed of delivery, stability of our systems and drive an improved customer experience.
  • Collaborating with your crew’s stakeholders and product managers to shape the work that the crew undertakes being proactive in identifying issues with scoping and requirements.
  • Working closely with non-technical crew members to focus on building systems that directly serve our customers as much as possible and deliver simple solutions to complex problems.
  • Mentoring and growing our awesome engineering community by supporting the professional development of your crew and playing an active role in recruitment.

We use lots of exciting technology

We value well architected solutions with reliable and maintainable code using the right software engineering principles above all else. Our engineers want to deliver the best possible solution they can and you will play a key role in ensuring our continued success in doing so.

We’re big believers in using the right tools for the job to build the best software. Right now we mainly make use of functional programming and tenets commonly associated with it but are always evaluating our tools.

Our existing systems make heavy use of Haskell, PureScript and Typescript, Hakyll, Bazel and Nix as well as PostgreSQL. We deploy and operate our software using Docker and Kubernetes in AWS.

We believe in learning and development

Our engineers come from a number of backgrounds. We have self-taught team members working with graduates from universities and bootcamps alike. Some of us have worked in large corporations while some of us have only ever known startups. A portion of the team consistently enjoy full-stack work, others prefer to specialise in certain areas. While your role and crew might see you targeting certain areas of work, we see this only as a specialism. You can expect to be exposed to many other parts of our codebase and to learn about and participate in its development while growing your own skills and supporting the growth of other engineers in your crew. From regular talks and reading groups to sponsored meet-ups and conference attendance, we want to help take our engineers to the next level.

Get information on how to apply for this position.

January 28, 2020 05:04 PM

Mark Jason Dominus

James Blaine keeps turning up

Today I learned that James Blaine (U.S. Speaker of the House, senator, perennial presidential candidate, and Secretary of State under Presidents Garfield, Arthur, and Harrison; previously) was the namesake of the notorious “Blaine Amendments”. These are still an ongoing legal issue!

The Blaine Amendment was a proposed U.S. constitutional amendment rooted in anti-Catholic, anti-immigrant sentiment, at a time when the scary immigrant bogeymen were Irish and Italian Catholics.

The amendment would have prevented the U.S. federal government from providing aid to any educational institution with a religious affiliation; the specific intention was to make Catholic parochial schools ineligible for federal education funds. The federal amendment failed, but many states adopted it and still have it in their state constitutions.

Here we are 150 years later and this is still an issue! It was the subject of the 2017 Supreme Court case Trinity Lutheran Church of Columbia, Inc. v. Comer. My quick summary is:

  1. The Missouri state Department of Natural Resources had a program offering grants to licensed daycare facilities to resurface their playgrounds with shredded tires.

  2. In 2012, a daycare facility operated by Trinity Lutheran church ranked fifth out of 44 applicants according to the department’s criteria.

  3. 14 of the 44 applicants received grants, but Trinity Lutheran's daycare was denied, because the Missouri constitution has a Blaine Amendment.

  4. The Court found (7–2) that denying the grant to an otherwise qualified daycare just because of its religious affiliation was a violation of the Constitution's promises of free exercise of religion. (Full opinion)

It's interesting to me that now that Blaine is someone I recognize, he keeps turning up. He was really important, a major player in national politics for thirty years. But who remembers him now?

by Mark Dominus (mjd@plover.com) at January 28, 2020 03:51 PM

January 27, 2020

Monday Morning Haskell

Hpack: A Simpler Package Format


In the last few weeks, we've gone through the basics of Cabal and Stack. These are two of the most common package managers for Haskell. Both of these programs help us manage dependencies for our code, and compile it. Both programs use the .cabal file format, as we explored in this article.

But .cabal files have some weaknesses, as we'll explore. Luckily, there's another tool out there called Hpack. With this tool, we'll use a different file format for our project, in a file called package.yaml. We'll run the hpack command, which will read the package file and generate the .cabal file. In this article, we'll explore how this program works.

In our free Stack mini-course, you'll learn how to use Stack as well as Hpack! If you're new to Haskell, you can also read our Liftoff series to brush up on your skills!

Cabal File Issues

One of the first weaknesses with the .cabal file is that it uses its own unique format. It doesn't use something more common like XML, JSON, YAML, or Markdown. So there's a small learning curve when it comes to questions of format. For instance, what are good indentation practices? What is the "correct" way to make a list of things? When are commas necessary, or not? And if, for whatever reason, you want to parse whatever is in your package file, you'll need a custom parser.

When using Hpack, we'll still have a package file, package.yaml. But this file uses a YAML format. So if your previous work has involved YAML files, that knowledge is more transferable. And if you haven't yet, it's likely you will use YAML at some point in the future. Plus every major language can parse YAML with ease.

If you're making a project with many executables and tests, you'll also find your .cabal file has a lot of duplication. You'll need to repeat certain fields for each section. Different executables could have the same GHC options and language extensions. The different sections will also tend to have a lot of dependencies in common.

In the rest of this article, we'll see how Hpack solves these problems. But first, we need to get it up and running.

Installing and Using Hpack

The Hpack program is an executable you can get from Stack. Within your project directory, you just need to run this command:

stack install hpack

After this, you should be able to run the hpack command anywhere on your system. If you run it in any directory containing a package.yaml file, the command will use that to generate the .cabal file. We'll explore this package file format in the next section.

When using Hpack, you generally do not commit your .cabal file to the GitHub repository. Instead, put it in .gitignore. Your README should clarify that users need to run hpack the first time they clone the repository.

As an extra note, Hpack is so well thought of that the default Stack template will include package.yaml in your starter project! This saves you from having to write it from scratch.

Package File

But how is this file organized anyway? Obviously we haven't eliminated the work of writing a package file. We've just moved it from the .cabal file to the package.yaml file. But what does this file look like? Well, it has a very similar structure to the Cabal file. But there are a few simplifications, as we'll see. To start, we have a metadata section at the top which is almost identical to that in the Cabal file.

name: MyHpackProject
version: 0.1.0.0
github: jhb563/MyHpackProject
license: MIT
author: "James Test"
maintainer: "james@test.com"
copyright: "Monday Morning Haskell 2020"

extra-source-files:
  - README.md

These lines get translated almost exactly. Various other fields get default values. One exception is that the github repository name will give us a couple extra links for free in the .cabal file.

-- Generated automatically in MyHpackProject.cabal!
homepage: https://github.com/jhb563/MyHpackProject#readme
bug-reports: https://github.com/jhb563/MyHpackProject/issues

source-repository head
  type: git
  location: https://github.com/jhb563/MyHpackProject

After the metadata, we have a separate section for global items. These include things such as dependencies and GHC options. We write these as top level keys in the YAML. We'll see how these factor into our generated file later!

dependencies:
  - base >=4.9 && <4.10

ghc-options:
  - -Wall

Now we get into individual sections for the different elements of our package. But these sections can be much shorter than they are in the .cabal file! For the library portion, we can get away with only listing the source directory!

library:
  source-dirs: src

This simple description gets translated into the library section of the .cabal file:

library
  exposed-modules:
      Lib
  other-modules:
      Paths_MyHpackProject
  hs-source-dirs:
      src
  build-depends:
      base >=4.9 && <4.10
  default-language: Haskell2010

Note that Paths_XXX is an auto-generated module of sorts. Stack uses it during the build process. This is one of a few different parts of this section that Hpack generates for us.

Executables are a bit different in that we group them all together in a single key. We use the top level key executables and then have a separate sub-key for each different binary. These can have their own dependencies and GHC options.

executables:
  run-project-1:
    main: Run1.hs
    source-dirs: app
    ghc-options:
      - -threaded
    dependencies:
      - MyHpackProject
  run-project-2:
    main: Run2.hs
    source-dirs: app
    dependencies:
      - MyHpackProject

From this, we'll get two different executable sections in our .cabal file! Note that these inherit the "global" dependency on base and the GHC option -Wall.

executable run-project-1
  main-is: Run1.hs
  other-modules:
      Paths_MyHpackProject
  hs-source-dirs:
      app
  build-depends:
      MyHpackProject
    , base >=4.9 && <4.10
  ghc-options: -Wall -threaded
  default-language: Haskell2010

executable run-project-2
  ...

Test suites function in much the same way as executables. You'll just want a separate section tests after your executables.

Module Inference

So far we've saved ourselves from writing a bit of non-intuitive boilerplate. But there are more gains to be had! One annoyance of the .cabal file is that you will see error or warning messages if any of your modules aren't listed. So when you make a new module, you always have to update .cabal!

Hpack fixes this issue for us by inferring the layout of our modules! Notice how we made no mention of the individual modules in package.yaml above. But they still appeared in the .cabal file. If we don't specify, Hpack will search our source directory for all Haskell source files. It will assume they all go under exposed-modules. So even if we have a few more files, everything gets listed with the same basic description of our library.

-- Files in source directory
-- src/Lib.hs
-- src/Parser.hs
-- src/Router.hs
-- src/Internal/Helpers.hs

...
-- Hpack Library Section
library:
  source-dirs: src

-- Cabal File Library Section
library
  exposed-modules:
    , Internal.Helpers
    , Lib
    , Parser
    , Router
  other-modules:
      Paths_MyHpackProject
  ...

Hpack also takes care of alphabetizing our modules!

There are, of course, times when we don't want to expose all our modules. In this case, we can list the modules that should remain as "other" in our package file. The rest still appear under exposed-modules.

-- Package File
library:
  source-dirs: src
  other-modules:
    - Internal.Helpers

-- Cabal File
library
  exposed-modules:
    , Lib
    , Parser
    , Router
  other-modules:
      Internal.Helpers
  ...

If you want the external API to be more limited, you can also explicitly list the exposed modules. Hpack infers that the rest fall under "other".

-- Package File
library:
  source-dirs: src
  exposed-modules:
    - Lib

Remember that you still need to run the hpack command when you add a new module! Otherwise there's no update to the .cabal file. This habit takes a little while to learn but it's still easier than editing the file each time!

Reducing Duplication

There's one more area where we can get some de-duplication of effort. This is in the use of "global" values for dependencies and compiler flags.

Normally, the library, executables and test suites must each list all their dependencies and the options they need. So for example, we might find that all our elements use a particular version of the base and aeson libraries, as well as the -Wall flag.

library
  ghc-options: -Wall
  build-depends:
      base >=4.9 && <4.10
    , aeson
  ...

executable run-project-1
  ghc-options: -Wall -threaded
  build-depends:
      MyHpackProject
    , aeson
    , base >=4.9 && <4.10
  ...

With Hpack, we can simplify this by creating global values for these. We'll add dependencies and ghc-options as top level keys in our package file. Then each element can include its own dependencies and options as needed. The following will produce the same .cabal file output as above.

dependencies:
  - base >=4.9 && <4.10
  - aeson

ghc-options:
  - -Wall

library:
  source-dirs: src

executables:
  run-project-1:
    ghc-options:
      - -threaded
    dependencies:
      - MyHpackProject

Conclusion

Hpack isn't a cure-all. We've effectively replaced our .cabal file with the package.yaml file, so at the end of the day we still have to put some effort into our package management process. Still, Hpack saves a good amount of the duplicated, manual work we would otherwise do maintaining the .cabal file by hand. Just remember to run hpack whenever something would alter the .cabal file, such as adding a new module or build dependency. Otherwise it can get frustrating!

Next week, we'll start looking at Nix, another popular package manager among Haskellers!

by James Bowen at January 27, 2020 03:30 PM

Neil Mitchell

One Haskell IDE to rule them all

Summary: The Haskell IDE Engine and Ghcide teams are joining forces on a single IDE.

This weekend many of the Haskell IDE Engine (HIE) and Ghcide developers met up for the Bristol Hackathon. Writing an IDE is a lot of work, and the number of contributors is finite, so combining forces has always seemed like a good idea. We now have a plan to combine our efforts, taking the best of each, and putting them together. Taking a look at the best features from each:

  • HIE provides a lot of plugins which extend the IDE: a choice of three formatters, LiquidHaskell, HLint and Hoogle, among others. There are lots, and they are quite mature.
  • HIE has great build scripts that build the IDE for lots of different compilers and configurations.
  • HIE has gained lots of arcane knowledge about LSP and various clients, with an understanding of how best to respond in terms of latency/liveness.
  • HIE has driven a lot of improvements in the GHC API.
  • HIE has pioneered a lot of code that Ghcide subsequently reused, e.g. completions and hover.
  • Ghcide uses a Shake graph based approach to manage the IDE state, allowing a simpler programming model.
  • Ghcide breaks the GHC monad apart, making it easier to do tricks like reusing .hi files and multi-component builds with a single GHC session.
  • Both projects use the same set of underlying libraries - haskell-lsp, lsp-test and hie-bios.

Putting these together, we've decided that the best way forward is to create a new project at haskell/ide which combines the best of both. That project will be a source of plugins and a complete distribution of an IDE. Under the hood, it will use Ghcide as a library, making use of the core graph and logic. The new IDE project will take over plugins and build system from HIE. There are some things in Ghcide that will be separated into plugins, e.g. the code actions. There are some ideas in HIE that will be folded into Ghcide, e.g. liveness, plugin wrappers and LSP quirks. Together, we hope to create a world class IDE experience for Haskell.

In the short term, we don't recommend anyone switch to this new IDE, as we're still putting the pieces together - continue using whatever you were using before. If you're interested in helping out, we're tracking some of the major issues in this ticket and the IDE and Ghcide repos.

Thanks to everyone who has contributed their time to both projects! The current state is a consequence of everyone's time and open collaboration. The spirit of friendship and mutual assistance typifies what is best about the Haskell community.

By Alan Zimmerman, Neil Mitchell, Moritz Kiefer and everyone at the Bristol Hackathon

by Neil Mitchell (noreply@blogger.com) at January 27, 2020 10:01 AM

FP Complete

Transformations on Applicative Concurrent Computations

When deciding which language to use to solve challenges that require heavy concurrent algorithms, it's hard to not consider Haskell. Its immutable and persistent data structures reduce the introduction of accidental complexity, and the GHC runtime facilitates the creation of thousands of (green) threads without having to worry as much about the memory and performance costs.

The epitome of Haskell's concurrent API is the async package, which provides higher-order functions (e.g. race, mapConcurrently, etc.) that allow us to run IO sub-routines and combine their results in various ways while executing concurrently. It also offers the type Concurrently which allows developers to give normal sub-routines concurrent properties, and also provides Applicative and Alternative instances that help in the creation of values from composing smaller sub-routines.

In this blog post, we will discuss some of the drawbacks of using the Concurrently type when composing sub-routines. Then we will show how we can overcome these shortcomings by taking advantage of the structural nature of the Applicative and Alternative typeclasses; re-shaping and optimizing the execution of a tree of sub-routines.

And, if you simply want to get these performance advantages in your Haskell code today, you can cut to the chase and begin using the new Conc datatype we've introduced in unliftio 0.2.9.0.

The drawbacks of Concurrently

Getting started with Concurrently is easy. We can wrap an IO a sub-routine with the Concurrently constructor, and then we can compose async values using the map (<$>), apply (<*>), and alternative (<|>) operators. An example might be:

myPureFunction :: String -> String -> String -> String
myPureFunction a b c = a ++ " " ++ b ++ " " ++ c

myComputation :: Concurrently String
myComputation =
  myPureFunction
  <$> Concurrently fetchStringFromAPI1
  <*> (    Concurrently fetchStringFromAPI2_Region1
       <|> Concurrently fetchStringFromAPI2_Region2
       <|> Concurrently fetchStringFromAPI2_Region3
       <|> Concurrently fetchStringFromAPI2_Region4)
  <*> Concurrently fetchStringFromAPI3

Let's talk a bit on the drawbacks of this approach. How many threads do you think we need to make sure all these calls execute concurrently? Try to come up with a number and an explanation and then continue reading.

I am guessing you are expecting this code to spawn six (6) threads, correct? One for each IO sub-routine that we are using. However, with the existing implementation of Applicative and Alternative in Concurrently, we will spawn at least ten (10) threads. Let's explore these instances to have a better understanding of what is going on:

instance Applicative Concurrently where
  pure = Concurrently . return
  Concurrently fs <*> Concurrently as =
    Concurrently $ (\(f, a) -> f a) <$> concurrently fs as

instance Alternative Concurrently where
  Concurrently as <|> Concurrently bs =
    Concurrently $ either id id <$> race as bs

First, let us expand the alternative calls in our example:

    Concurrently fetchStringFromAPI2_Region1
<|> Concurrently fetchStringFromAPI2_Region2
<|> Concurrently fetchStringFromAPI2_Region3
<|> Concurrently fetchStringFromAPI2_Region4

--- is equivalent to
Concurrently (
  either id id <$>
    race {- 2 threads -}
      fetchStringFromAPI2_Region1
      (either id id <$>
         race {- 2 threads -}
           fetchStringFromAPI2_Region2
           (either id id <$>
              race {- 2 threads -}
                fetchStringFromAPI2_Region3
                fetchStringFromAPI2_Region4))
)

Next, let us expand the applicative calls:

    Concurrently (myPureFunction <$> fetchStringFromAPI1)
<*> Concurrently fetchStringFromAPI2
<*> Concurrently fetchStringFromAPI3

--- is equivalent to

Concurrently (
  (\(f, a) -> f a) <$>
    concurrently {- 2 threads -}
      ( (\(f, a) -> f a) <$>
         concurrently {- 2 threads -}
           (myPureFunction <$> fetchStringFromAPI1)
           fetchStringFromAPI2
      )
      fetchStringFromAPI3
)

You can see that we always spawn two threads for each pair of sub-routines. Suppose we have 7 sub-routines we want to compose via Applicative or Alternative. Using this implementation we would spawn at least 14 new threads when at most 8 should do the job: for each composition we do, an extra thread is spawned to deal with bookkeeping.

Another drawback to consider: what happens if one of the values in the call is a pure call? Given this code:

pure foo <|> bar

We get to spawn a new thread (unnecessarily) to wait for foo, even though it has already been computed and it should always win. As we mentioned before, Haskell is an excellent choice for concurrency because it makes spawning threads cheap; however, these threads don't come for free, and we should strive to avoid redundant thread creation.

Introducing the Conc type

To address the issues mentioned above, we implemented a new type called Conc in our unliftio package. It has the same purpose as Concurrently, but it offers some extra guarantees:

  • There is going to be only a single bookkeeping thread for all Applicative and Alternative compositions.
  • If we have pure calls in an Applicative or an Alternative composition, we will not spawn a new thread.
  • We will optimize the code for trivial cases. For example, not spawning a thread when evaluating a single Conc value (instead of a composition of Conc values).
  • We can compose more than IO sub-routines. Any monadic type that implements MonadUnliftIO is accepted.
  • Children threads are always launched in an unmasked state, not the inherited state of the parent thread.

The Conc type is defined as follows:

data Conc m a where
  Action :: m a -> Conc m a
  Apply  :: Conc m (v -> a) -> Conc m v -> Conc m a
  LiftA2 :: (x -> y -> a) -> Conc m x -> Conc m y -> Conc m a
  Pure   :: a -> Conc m a
  Alt    :: Conc m a -> Conc m a -> Conc m a
  Empty  :: Conc m a
  Then   :: Conc m a -> Conc m b -> Conc m b

instance MonadUnliftIO m => Applicative (Conc m) where
  pure   = Pure
  (<*>)  = Apply
  (*>)   = Then
  liftA2 = LiftA2

instance MonadUnliftIO m => Alternative (Conc m) where
  (<|>) = Alt

If you are familiar with Free types, this will look eerily familiar. We are going to represent our concurrent computations as data so that we can later transform or evaluate them as we see fit. In this setting, our first example would look something like the following:

myComputation :: Conc IO String
myComputation =
  myPureFunction
  <$> conc fetchStringFromAPI1
  <*> (    conc fetchStringFromAPI2_Region1
       <|> conc fetchStringFromAPI2_Region2
       <|> conc fetchStringFromAPI2_Region3
       <|> conc fetchStringFromAPI2_Region4)

--- is equivalent to

Apply (myPureFunction <$> fetchStringFromAPI1)
      (Alt (Action fetchStringFromAPI2_Region1)
           (Alt (Action fetchStringFromAPI2_Region2)
                (Alt (Action fetchStringFromAPI2_Region3)
                     (Action fetchStringFromAPI2_Region4))))
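Here conc wraps an action into the Action constructor, and unliftio also provides runConc to interpret the tree. A minimal runnable sketch, assuming unliftio >= 0.2.9.0 is installed; the fetch actions are local stand-ins for real API calls:

```haskell
import Control.Applicative ((<|>))
import UnliftIO.Async (conc, runConc)

-- Hypothetical stand-ins for the remote calls in the example above.
fetchStringFromAPI1, fetchStringFromAPI2_Region1, fetchStringFromAPI2_Region2 :: IO String
fetchStringFromAPI1         = pure "hello"
fetchStringFromAPI2_Region1 = pure "region1"
fetchStringFromAPI2_Region2 = pure "region2"

main :: IO ()
main = do
  -- runConc interprets the tree: pure values spawn no threads, and a
  -- single bookkeeping thread serves the whole composition.
  result <- runConc $
    (\a b -> a ++ " " ++ b)
      <$> conc fetchStringFromAPI1
      <*> (conc fetchStringFromAPI2_Region1 <|> conc fetchStringFromAPI2_Region2)
  putStrLn result
```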

You may notice we keep the tree structure of the Concurrently implementation. However, given we are dealing with a pure data structure, we can modify our Conc value to something that is easier to evaluate. Indeed, thanks to the Applicative interface, we don't need to evaluate any of the IO sub-routines to do transformations (magic!).

We have additional (internal) types that flatten all our alternatives and applicative values:

data Flat a
  = FlatApp !(FlatApp a)
  | FlatAlt !(FlatApp a) !(FlatApp a) ![FlatApp a]

data FlatApp a where
  FlatPure   :: a -> FlatApp a
  FlatAction :: IO a -> FlatApp a
  FlatApply  :: Flat (v -> a) -> Flat v -> FlatApp a
  FlatLiftA2 :: (x -> y -> a) -> Flat x -> Flat y -> FlatApp a

These types carry the same information as our Conc type, but with a few differences:

  • The Flat type separates Conc values created via Applicative from the ones created via Alternative
  • The FlatAlt constructor flattens an Alternative tree into a list (helping us spawn all of them at once and facilitating the usage of a single bookkeeping thread).
    • Note that we represent this as an "at least two" list, similar to the non-empty list representation from the semigroups package.
  • The Flat and FlatApp types are not polymorphic in their monadic context; they rely directly on IO. We can transform the m parameter in our Conc m a type to IO via the MonadUnliftIO constraint.

The first example of our blog post, when flattened, would look something like the following:

FlatApp
  (FlatApply
    (FlatApp (FlatAction (myPureFunction <$> fetchStringFromAPI1)))
    (FlatAlt (FlatAction fetchStringFromAPI2_Region1)
             (FlatAction fetchStringFromAPI2_Region2)
             [ FlatAction fetchStringFromAPI2_Region3
             , FlatAction fetchStringFromAPI2_Region4 ]))

Using a flatten function that transforms a Conc value into a Flat value, we can later evaluate the concurrent sub-routine tree in a way that is optimal for our use case.

Performance

So given that the Conc API reduces the number of threads created via Alternative, our implementation should work best, correct? Sadly, it is not all peachy. To ensure that we get the result of the first thread that finishes on an Alternative composition, we make use of the STM API. This approach works great when we want to gather values from multiple concurrent threads. Sadly, the STM monad doesn't scale too well when composing lots of reads, making this approach prohibitive if you are composing tens of thousands of Conc values.

Considering this limitation, we only use STM when an Alternative function is involved; otherwise, we rely on MVars to compose the results of multiple threads via Applicative. We can do this freely because we can change the evaluator of the sub-routine tree created by Conc on the fly.
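Because the tree is plain data, that choice can be made by simple inspection. The following is an illustrative helper over the Flat types above, not the actual unliftio code:

```haskell
-- Illustrative sketch only: report whether a flattened tree contains an
-- Alternative composition anywhere, in which case the STM-based
-- evaluator is needed; otherwise the MVar-based one suffices.
needsSTM :: Flat a -> Bool
needsSTM (FlatAlt _ _ _) = True
needsSTM (FlatApp app)   = goApp app

goApp :: FlatApp a -> Bool
goApp (FlatPure _)       = False
goApp (FlatAction _)     = False
goApp (FlatApply f v)    = needsSTM f || needsSTM v
goApp (FlatLiftA2 _ x y) = needsSTM x || needsSTM y
```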

Conclusions

We showcased how we can model the composition of computations as an Applicative and Alternative tree, and then, taking advantage of these APIs, transformed this computation tree into something easier to execute concurrently. We also took advantage of this sub-routines-as-data approach to switch the evaluator between MVar and STM compositions.

January 27, 2020 04:28 AM

January 25, 2020

Oleg Grenrus

Case study: migrating from lens to optics

Posted on 2020-01-25 by Oleg Grenrus lens, optics

As you are reading this post, you probably know that there is

  • the lens library by Edward Kmett et al. which is de facto optics library for Haskell. It's famous also for its type errors.
  • the optics library by Adam Gundry, Andres Löh, Andrzej Rybczak and myself, which uses a different representation for optics (note: slanted optics is a concept, monospace optics is a library name). I recommend reading through the introduction in the official documentation, especially the Comparison with lens section

Some time ago I commented on Reddit that there are no real experience reports about migrating "real world Haskell codebases" from lens to optics. So I decided to do an experiment. The repository is the public Futurice Haskell code monorepository (HMR), which I worked on during my time at Futurice. The whole codebase is a bit over 70000 lines in 800 files.

One disclaimer is that I did this for fun and out of curiosity; in other words, Futurice didn't ask me to do this (fun is subjective). Another disclaimer is that my experiences shouldn't be extrapolated into how easy or hard this kind of migration is, only that it's possible. I'm way too familiar with the codebase under "migration", as well as with the lens and optics libraries. On the other hand, it turned out to be relatively easy for me, and I share my experiences so it will be easier for others.

Why move from lens to optics

Before discussing how, let me talk a little about why.

There may be different reasons why a team would like to migrate from lens (or from no lens at all) to optics.

One bad reason is trying to reduce dependency footprint. It won't. Libraries use lens, and that cannot be easily changed. The HMR consists of dozens of small web services, which use

All these libraries depend on lens. But we can use them with optics too, as I'll show later. And even if we didn't have libraries with a lens interface, lens would be there somewhere in a codebase of this scale. The HMR build plan consists of over 500 components, of which about 400 are dependencies from Hackage. In fact, from this "industrial" point of view, it would be better if microlens didn't exist: it's just more duplicate code somewhere in there. In the dependency closure there are e.g.

microlens-th-0.4.3.2
microlens-mtl-0.2.0.1
microlens-0.4.11.2
lens-4.17.1

There are also multiple implementations of other things, so this problem is not unique to van Laarhoven lens libraries.

My current project has about 450 components in the build plan, so this scale is not unique.

On the other hand, one proper reason to use optics could be better type errors. (I'm too experienced to judge that properly, but optics at least tries to produce better errors.) Another compelling reason is OverloadedLabels, which just works with optics. We'll see an example of that. Thanks to Andrzej Rybczak's PR, the swagger2 package has an optics interface via OverloadedLabels (since version 2.5), and it's neat.

futurice-prelude

HMR has its own prelude. The futurice-prelude package is at the bottom of the package graph; in other words, everything else there uses the package and imports the Futurice.Prelude module. It's a very fat prelude, re-exporting a lot of stuff. It also has a few auxiliary modules which most of the downstream packages need.

Having a module imported by everything else is nice, especially as this module currently re-exports the following lens definitions:

That's not much, but enough for basic optics usage. We'll change the exports to use optics:

and then continue fixing type errors, as usual when refactoring in Haskell.

A careful reader may notice that operators are not exported from the main Optics module. If you don't like them, it's easier to not import them. The & and <&> operators are available directly from modules in base, so we import them from there.

There are a few missing or different bits in the optics imports:

Similarly to lazy and strict there is packed and unpacked, but migrating those is very trivial:

It is very important to remember to re-export %, the optics composition operator.

In addition we also export

  • simple and castOptic combinators, we'll need them soon. See optics#286 issue. In optics the identity optic is not id.

  • lensVL and traversalVL which help convert van Laarhoven lenses (from external libraries) to Optics representation.

  • traverseOf: sometimes we use van Laarhoven lenses directly as traversals; that trick doesn't work with optics. For a similar reason we export traversed: traverse is not an Optic.

One thing which I would have liked to be able to do is to re-export a lens module qualified, something like:

It would have helped with this migration, but it would also help to re-export Data.Map and Data.ByteString etc., which would be very nice for fat preludes.

That's it for the prelude. Next I'll list various issues I ran into when trying to make a single service in HMR compile. I picked a particular one which I remember uses lens a lot for business domain stuff.

To conduct this experiment I:

  1. Added optimization: False to cabal.project.local to make compilation faster
  2. cabal build checklist-app:libs
  3. On error ghcid -c 'cabal repl failing-lib', fix errors in that package
  4. Go to step 2.

Individual issues

Couldn't match type ‘optics-core-0.2:Optics.Internal.Optic.Optic

This is by far the most common type of error:

The important bit is

That means that we need to replace the dot operator . with the optics composition combinator %. Another variation on the same theme is

which happens when you forget to change a few dots.
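For example, with hypothetical makeLenses-generated lenses address and street on a Person record, the fix is mechanical:

```haskell
-- Before (lens): optics compose with the ordinary (.) operator
street1 = person ^. address . street

-- After (optics): the same composition uses (%)
street2 = person ^. address % street
```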

swagger2

As already mentioned, swagger2 has an optics interface; the needed instances are defined in the Data.Swagger.Optics module. I imported it into the Orphans module of futurice-prelude to make the instances available everywhere.

Most of the changes are then of the following flavor:

I honestly just used regexps: s/& Swagger./\& #/, and s/type_/type/.

In one place I had to add a type annotation, as the type became ambiguous. This is not necessarily a bad thing.

That was very straight-forward, as there is optics support in the swagger2 library now. Thanks Andrzej!

gogol and amazonka and Chart

Unfortunately gogol and amazonka don't have optics support, as far as I know. Both libraries use lens to provide getters and setters to Google Cloud and AWS domain types.

For the code which operates with these libraries directly (relatively few modules, as the API is encapsulated), I import qualified Control.Lens as L and prefix operators with L..

The resulting code is a slight mess, as you can have both ^. from optics and L.^. from lens. That's not nice, even if it works.

A solution would be to invest a day and define a bunch of LabelOptic instances for gogol, amazonka and Chart types. One could drop prefixes at the same time. I'd probably write some Template Haskell to automate that task (TH would be more fun than typing all the definitions by hand). Something like:
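The generated instances might look like this hand-written sketch; Bucket is a hypothetical stand-in for a gogol/amazonka type, and bName for the van Laarhoven lens such libraries provide:

```haskell
{-# LANGUAGE DataKinds             #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies          #-}
{-# LANGUAGE UndecidableInstances  #-}
import Data.Text (Text)
import Optics.Core (A_Lens, LabelOptic (..), lensVL)

-- Hypothetical: 'Bucket' stands in for a library type and 'bName' for
-- the van Laarhoven lens the library generates for its name field.
instance (k ~ A_Lens, a ~ Text, b ~ Text)
    => LabelOptic "name" k Bucket Bucket a b where
  labelOptic = lensVL bName
```

After which bucket ^. #name works anywhere OverloadedLabels is enabled.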

or maybe even reify module contents, look for lenses, and do an even more magical thing. It would be more uniform than doing it by hand. The AWS and Google Cloud APIs are huge, and it's hard to know which part you will need next.

The amazonka and gogol libraries are code generated, so one other option is to hack the generator to produce optics packages too.

Chart is not much different lens-wise, though its API is smaller. There the manual creation of Chart-optics might be viable.

percent operator

In one place HMR actually used Rational, and the Rational smart constructor %. In that module optics weren't used, so I went with hiding ((%)). Maybe I'd have defined a rational named constructor if I needed it more.

lens-aeson

In one library integrating with a peculiar JSON API, Futurice uses lens-aeson; luckily there's also aeson-optics, so that issue is fixed by changing the import.

In that same spot we ran into another difference between optics and lens: ^. in optics really wants a getter. You cannot give it a Prism like _String. The lens behavior was (ab)used to return mempty, i.e. empty Text, on a non-match. I think using foldOf is better, though a combinator requiring AffineFold would have been even better (i.e. a fold resulting in at most one value; the non-match could still be mempty, or def from Default).

Similarly, in some places ^? was used with folds to get the first value. There headOf is semantically more correct.

By the way, we debated whether we should rename more stuff for optics, but settled on keeping the names mostly the same. It really shows in the migration overall: many things just work out of the box after changing the imports.

Representable

futurice-prelude defines the following type-class:

It's useful when working with Representable containers, where Rep f, i.e. index is some enumerable type. For example:

which can be indexed (represented?) by DayOfWeek. One can use PerDayOfWeek to aggregate data per day of the week. I find that a quite common need.

One could directly convert that class to use optics, but that is unnecessary. The ix combinator in optics can be stronger than the default AffineTraversal, for example a Lens for Representable containers. In my opinion this simplifies code, as one uses the same combinator, ix, to index both Maps and types like PerDayOfWeek.

where gix derives the Lens generically.

Classy lenses

HMR also has plenty of classy lens type-classes, defined manually. There the identity instance needs to be implemented as castOptic simple:

Another alternative would be to scratch the whole class and rather define

and use OverloadedLabels.

Small usage of lensVL

In a few places HMR uses lens interfaces to libraries which don't have an optics counterpart yet. That's where one can selectively sprinkle lensVL:

(The library in question is generics-sop-lens, and in fact there is an unreleased optics variant: optics-sop)

Another use-case for lensVL is when some lenses are defined manually, directly using their van Laarhoven encoding:

Here we move things a bit and insert lensVL in one place, thus avoiding rewriting the actual code of the lens.
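As a sketch with a hypothetical User type, the wrapping is a single lensVL at the definition site:

```haskell
import Data.Text (Text)
import Optics.Core (Lens', lensVL)

data User = User { userName :: Text, userAge :: Int }

-- The original van Laarhoven definition stays untouched:
nameVL :: Functor f => (Text -> f Text) -> User -> f User
nameVL f u = (\n -> u { userName = n }) <$> f (userName u)

-- One wrapper converts it into the optics representation:
name :: Lens' User Text
name = lensVL nameVL
```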

Custom combinators consuming optics

One may notice that optics doesn't have types like Getting, ALens or ATraversal. That is because optics types aren't RankNTypes, so Lens works as well as ALens. Almost.

You may need to do rewrites similar to the following one:

This is not nice, but that's what we have in optics. (Not that Endo [a] is nice either).

One could write also

but that would accept only a Getter (same for lens, IIRC), thus we need to jump through a hoop to make a more usable definition.

No IndexedFold

This is a thing which is renamed in optics: indexed variants have shorter names, and in this case you are looking for IxFold. Indexed optics are well supported in optics, and though they are slightly different in their interface, I didn't run into showstopper problems.

withIndex

There is no withIndex in optics. In HMR it was used in the following snippet:

Let's try to understand what happens there. We fold with an index, drop the index down into the value, map over the index, and, as we now have (newIndex, value), we make an indexed fold with the new index.

Why not just remap index directly using reindexed:

It turns out we don't need withIndex here. One could use the migration to optics as an opportunity to make a code review scan of their codebase.

Note the composition-like combinator %&: it allows post-processing optics with "optics transformers" which aren't optics themselves, like reindexed. % and %& are left associative (. is right associative), so the above ifolded %& reindexed EnumValue would work as part of a bigger optic too.

ALens

In one module the ALens type was used, with its special operators #~, ^# etc. You sometimes need to take a whole ALens in, if you both view and set. With optics one can use Lens with .~ and ^..

And that's the last noteworthy issue. After a while, everything compiles and even tests pass on the first run.

Conclusion

I hope that this (almost) "real case study" illustrates some issues which one could run into when moving to use optics, in old or new codebase.

It's worth mentioning that the trickiest issues were inside the auxiliary libraries. The core domain logic, with types whose optics were built with makeLenses, mostly required only changing . to %.

The biggest obstacle is libraries which have lens (i.e. van Laarhoven encoding) interfaces. Yet creating an optics interface for them in advance is not an insurmountable task.

Whether optics is better for your team is for your team to decide; optics is suitable for "real world Haskell".

January 25, 2020 12:00 AM

January 20, 2020

Monday Morning Haskell

Nicer Package Organization with Stack!


In last week's article we explored Haskell package management using Cabal. This tool has been around for a while and serves as the backbone for Haskell development even to this day. We explored the basics of this tool, but also noticed a few issues. These issues centered around dependency management, and what happens when package versions conflict.

Nowadays, most Haskell developers prefer the Stack tool to Cabal. Stack still uses many of the features of Cabal, but adds an extra layer, which helps deal with the problems we saw. In this article, we'll do a quick overview of the Stack tool and see how it helps.

For a more in-depth look at Stack, you should take our free Stack mini-course. While you're at it, you can also download our Beginner Checklist. This will help ensure you're up to speed with the basics of Haskell.

Creating a Stack Project

Making a new project with Stack is a little more streamlined than with Cabal by itself. To start with, we don't need the extra step of creating our project directory beforehand. The stack new command handles creating this directory.

>> stack new MyStackProject
>> cd MyStackProject

By default, Stack will also generate some basic source files for our project. We get a library file in src/Lib.hs, an executable program in app/Main.hs, and a test in test/Spec.hs. It also lists these files in the .cabal file. So if you're newer to using Haskell, it's easier to see how the file works.

You can use different templates to generate different starter files. For example, the servant template will generate boilerplate for a Servant web server project. The .cabal file includes some dependencies for using Servant. Plus, the generated starter code will have a very simple Servant server.

>> stack new MyStackProject servant

Basic Commands

As with Cabal, there are a few basic commands we can use to compile and run our Haskell code with Stack. The stack build command will compile our library and executables. Then we can use stack exec with an executable name to run that executable:

stack exec run-project-1
stack exec run-project-2

Finally, we can run stack test to run the different test suites in our project. Certain commands are actually variations on stack build, with different arguments. In these next examples, we run all the tests with --test, and then run a single test suite by name.

>> stack build --test
>> stack build MyStackProject:test:test-suite-1

Stack File

Stack still uses Cabal under the hood, so we still have a .cabal file for describing our package format. But, we also have another package file called stack.yaml. If you look at this file, you'll see some new fields. These fields provide some more information about our project as a whole. This information will help us access dependencies better.

The resolver field tells us which set of packages we're using from Stackage. We'll discuss this more later in the article.

resolver: lts-13.19

Then the packages field gives a list of the different packages in our project. Stack allows us to manage multiple packages at once with relative ease compared to Cabal. Each entry in this list refers to a single directory path containing a package. Each individual package has its own .cabal file.

packages:
  - ./project-internal
  - ./project-parser
  - ./project-public

Then we also see a field for extra-deps. This lists dependency packages outside of our current resolver set. By default, it should be empty. Again, we'll explore this a bit later once we understand the concept of resolvers better.
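As a quick preview, an entry in extra-deps pins a package and version that isn't in the resolver set; the package name here is hypothetical:

```yaml
extra-deps:
  - some-internal-package-0.1.0.0
```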

Installing Packages

Besides the basic commands, we can also use stack install. But its functionality is a bit different from cabal install. If we just use stack install by itself, this will "install" the executables for this project on our local path. Then we can run them from any directory on our machine.

>> stack install
...
Copied executables to /home/username/.local/bin/
- run-project-1
- run-project-2
>> cd ..
>> run-project-1
"Hello World!"

This is different from the way we installed dependency packages using cabal install. We could use stack install in a similar way. But this is more for different Haskell programs we want to use. For example, we can install the hlint code linter like so:

stack install hlint

But unlike with vanilla Cabal, it's unnecessary to do this with dependency packages! Using stack build installs dependencies for us! We can add a dependency (say the split package), and build our code without a separate install command!

This is one advantage we get from using Stack, but on its own it still seems small. Let's start looking at the real benefits from Stack when it comes to dependency conflicts.

Cross-Project Conflicts

Recall what happened when using different versions of a package on different projects on our machine. We encountered a conflict, since the global index could only have one version of the package. We solved this with Cabal by using sandboxes. A sandbox ensured that a project had an isolated location for its dependencies. Installing packages on other projects would have no effect on this sandbox.

Stack solves this problem as well by essentially forcing us to use sandboxing. It is the default behavior. Whenever we build our project, Stack will generate the .stack-work directory. This directory contains all our dependencies for the project. It also stores our compiled code and executables.

So with Stack, you don't have to remember whether you already initialized the sandbox when running normal commands. You also don't have to worry about deleting the sandbox by accident.

Dependency Conflicts

Sandboxing solves the issue of inter-project dependency conflicts. But what about conflicts within a project? Stack's system of resolvers is the solution to these. You'll see a resolver version in your stack.yaml file. By default, this will be the latest lts version at the time you created your project. Essentially, a resolver is a set of packages that have no conflicts with each other.

Most Haskell dependencies you use live on the Hackage repository. But Stack adds a further layer with Stackage. A resolver set in Stackage contains many of the most common Haskell libraries out there. But there's only a single version of each. Stack maintainers have curated all the resolver sets. They've exhaustively checked that there are no dependency conflicts between the package versions in the set.

Here's an example. The LTS-14.20 resolver uses version 1.4.6.0 of the aeson library. All packages within the resolver that depend on this library will be compatible with this version.

So if you stick to dependencies within the resolver set, you won't have conflicts! This means you can avoid the manual work of finding compatible versions.

There's another bonus here. Stack will determine the right version of the package for us. So we generally don't need version constraints in our .cabal files. We would just list the dependency and Stack will do the rest.

library
  exposed-modules: Lib
  build-depends:
      base
    , split
  ...

You'll generally want to stick to "long term support" (lts) resolvers. But there are also nightly resolvers if you need more bleeding edge libraries. Each resolver also corresponds to a particular version of GHC. So Stack figures out the proper version of the compiler your project needs. You can also rest assured that Stack will only use dependencies that work for that compiler.

Adding Extra Packages

While it's nice to have the assurance of non-conflicting packages, we still have a problem. What if we need Haskell code that isn't part of the resolver set we're using? We can't expect the Stack curators to think of every package we'll ever need. There's a lot of code on Github we could use. And indeed, even many libraries on Hackage are not in a Stackage resolver set.

We can still import these packages though! This is what the extra-deps field of stack.yaml is there for. We can add entries to this list in a few different formats. Generally, if you provide a name and a version number, Stack will look for this library on Hackage. You can also provide a Github repository, or a URL to download. Here are a couple examples:

extra-deps:
  - snappy-0.2.0.2
  - git: https://github.com/tensorflow/haskell.git
    commit: d741c3ee59a5c9569fb2026bd01c1c8ff22fa7c7

Note that you'll still have to add the package name as a dependency in your .cabal file. You can omit the version number though, as we did with other packages above.

Often, using one package as an "extra dependency" will then require other packages outside the resolver set. Getting all these can be a tedious process. But you can usually use the stack solver command to get the full list of packages (and versions) that you need.

Unfortunately, when you introduce extra dependencies, you're no longer guaranteed to avoid conflicts. But if you do encounter conflicts, you have a clear starting point. It's up to you to find a version of the new package that is compatible with your other dependencies. So the manual work is much more limited.

Conclusion

That wraps up our look at Stack and how it solves some of the core problems with using Cabal by itself. Next week, we'll explore the hpack tool, which simplifies the process of making our .cabal file. In a couple weeks, we'll also take a look at the Nix package manager. This is another favorite of Haskell developers!

by James Bowen at January 20, 2020 03:30 PM

Michael Snoyman

The Warp Executable

I recently reinstalled the OS on my laptop and very quickly ran into:

$ warp
-bash: warp: command not found

This made me realize just how frequently I use the warp executable in my day-to-day life, and decided to write a quick post about it.

How to install it

The executable may be available in your package manager. However, I recommend building/installing it with Stack:

  1. Install Stack, with curl -sSL https://get.haskellstack.org/ | sh or the Windows installer
  2. Run stack install wai-app-static, or stack install wai-app-static --resolver lts-14.20 to be a bit more pedantic
  3. There is no step 3

If you don’t have a Haskell toolchain set up, this will download a bunch of stuff and may take a bit. Sorry.

What it does

The Warp executable provides a simple file server. My most usual way of calling it is without any arguments, where it will serve the files in the current directory on port 3000. The two most common options I’ll pass it are:

$ warp -d some-directory # serve from that directory
$ warp -p 8080 # serve on a different port

You can use warp --help for a full listing of options, though there aren’t many.

Why it’s useful

Simply: no config file. You can set up a dedicated file server using nginx or others, but that takes much more effort. Oftentimes, I simply want to generate some HTML and then view it on my phone, for example. This works out perfectly.

Along those lines, I highly recommend checking out ngrok, which will provide a temporary public URL to a locally running service. This can be a great way to share a locally running site with someone else. It’s pretty common for me to have warp running in one terminal and ngrok http 3000 in another.

Also: Servius

I’ve considered writing a blog post about Servius many times before, but there’s never enough info to warrant a dedicated post. So now’s a good time to mention it. Servius is a souped-up version of the Warp executable. In addition to serving static files, it has support for rendering Markdown (using Github-flavored CommonMark). I’ll often use Servius instead of Warp when I’m drafting a blog post like this and want to check it out quickly.

Want to install Servius? Just run stack install servius or stack install servius --resolver lts-14.20.

Other tools?

One more shout-out: I really like the bat tool, aka “a cat clone with wings.” To get it, install Rust and run cargo install bat, or check out the many other installation options. I enjoy getting syntax highlighting and paging when I want to look at files, without having to pop them open in vim.

I may expand this list over time :)

January 20, 2020 04:23 AM

January 18, 2020

Chris Smith 2

Area and Volume Models for Algebra

Yesterday, I spent some time writing down a plausible path for rediscovering the Pythagorean theorem, and the algebra steps at the end got me started illustrating more algebraic identities with area and volume models.

Let’s start with some basic properties of multiplication:

The commutative property, expressed in an area model, just says that the area of a rectangle stays the same when it’s turned sideways!

The associative property, involving the product of three numbers, requires a volume model rather than an area model. We can draw each box like a loaf of sliced bread, and the associative property tells us that the volume stays the same no matter which direction it’s sliced.

The distributive property is the first that involves addition. A sum is represented by just lining up two shapes side-by-side. Viewed this way, the distributive property allows us to cut a rectangle without changing its area.

These are fairly trivial, but the same technique can be applied to more interesting examples. Perhaps the most common non-trivial area model came up in the article mentioned earlier. Here it is.

A minor variant on this formula, which is taught separately in some textbooks, is the square of a difference. This is a little trickier to draw, but it’s a great introduction to subtraction.

Even trickier yet is the difference of squares, and I haven’t figured out how to draw it without motion in the picture.

Volume models can also be used for non-trivial identities with 3rd-degree polynomials, such as the cube of a sum pictured below. Unfortunately, I was unable to find a two-dimensional layout that communicates this well in a still drawing, so I’ve instead produced an animation using CodeWorld, the math and computer science learning environment I’ve built (though this is more advanced code than I’d expect from any of my middle school students!) If you click the link for the animated version, you can pick out the eight parts corresponding to the expanded form.

Illustration of (a+b)³ = a³ + 3a²b + 3ab² + b³, as a volume model. Click below for animation

CodeWorld

I haven’t had the chance to think about models of more interesting 3rd-degree expressions, such as the sum of cubes: a³ + b³ = (a + b)(a² + b² - ab). If you have ideas for modeling these, I’d love to see them.
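For readers who want to double-check the cube-of-a-sum expansion without the picture, here is a quick numeric sketch in Haskell (my own illustration, not part of the article):

```haskell
-- Spot-check the identity illustrated by the volume model:
-- (a + b)^3 = a^3 + 3a^2b + 3ab^2 + b^3.
lhs, rhs :: Integer -> Integer -> Integer
lhs a b = (a + b) ^ 3
rhs a b = a ^ 3 + 3 * a ^ 2 * b + 3 * a * b ^ 2 + b ^ 3

main :: IO ()
main = print (and [ lhs a b == rhs a b | a <- [-5 .. 5], b <- [-5 .. 5] ])
```

Checking a grid of small values is of course no proof, but it makes a nice companion to the animated model.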

by Chris Smith at January 18, 2020 07:39 PM

January 17, 2020

Mark Jason Dominus

Pylgremage of the Sowle

As Middle English goes, Pylgremage of the Sowle (unknown author, 1413) is much easier to read than Chaucer:

He hath iourneyed by the perylous pas of Pryde, by the malycious montayne of Wrethe and Enuye, he hath waltred hym self and wesshen in the lothely lake of cursyd Lechery, he hath ben encombred in the golf of Glotony. Also he hath mysgouerned hym in the contre of Couetyse, and often tyme taken his rest whan tyme was best to trauayle, slepyng and slomeryng in the bed of Slouthe.

I initially misread “Enuye” as “ennui”, understanding it as sloth. But when sloth showed up at the end, I realized that it was simpler than I thought: it's just “envy”.

by Mark Dominus (mjd@plover.com) at January 17, 2020 03:27 PM

Gabriel Gonzalez

Why Dhall advertises the absence of Turing-completeness

Several people have asked why I make a big deal out of the Dhall configuration language being “total” (i.e. not Turing-complete) and this post will summarize the two main reasons:

  1. If Dhall is total, that implies that the language got several other things correct

  2. “Not Turing-complete” is a signaling mechanism that appeals to Dhall’s target audience

“Because of the Implication”

The absence of Turing completeness per se does not provide many safety guarantees. Many people have correctly noted that you can craft compact Dhall functions that take longer than the age of the universe to evaluate. I even provide a convenient implementation of the Ackermann function in Dhall to make it as easy as possible for people to foil the interpreter.

However, a total language like Dhall needs to get several other things correct in order to be able to guarantee that the language is not Turing complete. There are multiple ways you can eliminate Turing-completeness from a language, but nearly all of them improve the language in some way.

For example, the way Dhall eliminates Turing-completeness is:

  • Eliminating general recursion

    … which protects against common mistakes that introduce infinite loops

  • Having a strong type system

    … especially one with no escape hatches for reintroducing general recursion

  • Forbidding arbitrary side effects

    … which can also be another way to backdoor general recursion into a language

These three features are widely viewed as good things in their own right by people who care about language security, regardless of whether they are employed in service of eliminating Turing-completeness.

In other words, Turing-completeness functions as a convenient “umbrella” or “shorthand” for other safety features that LangSec advocates promote.

Shibboleth

According to Wikipedia a shibboleth is:

… a custom or tradition, usually a choice of phrasing or even a single word that distinguishes one group of people from another. Shibboleths have been used throughout history in many societies as passwords, simple ways of self-identification, signaling loyalty and affinity, maintaining traditional segregation, or protecting from real or perceived threats.

The phrase “not Turing-complete” is one such shibboleth. People who oppose the use of general-purpose programming languages for configuration files use this phrase as a signaling mechanism. This choice of words communicates to like-minded people that they share the same values as the Dhall community and agree on the right balance between configuration files being data vs. being programs.

If you follow online arguments about programmable configuration files, the discussion almost invariably follows this pattern:

  • Person A: “Configuration files should be inert so that they are easier to understand and manipulate”
  • Person B: “Software engineering practices like types and DRY can prevent config-induced production outages. Configs should be written in a general-purpose programming language.”
  • Person A: “But configuration files should not be Turing-complete!”

Usually, what “Person A” actually meant to say was something like:

  • configuration files should not permit arbitrary side effects
  • configuration files should not enable excessive indirection or obfuscation
  • configuration files should not crash or throw exceptions

… and none of those desires necessarily imply the absence of Turing-completeness!

However, for historical reasons all of the “Person A”s of the world rallied behind the absence of Turing-completeness as their banner. When I advertise that Dhall is not Turing-complete I’m signaling to them that they “belong here” with the rest of the Dhall community.

Conclusion

In my view, those two points make the strongest case for not being Turing complete. However, if you think I missed an important point just let me know.

by Gabriel Gonzalez (noreply@blogger.com) at January 17, 2020 02:43 PM

January 16, 2020

Mark Jason Dominus

A serious proposal to exploit the loophole in the U.S. Constitution

In 2007 I described an impractical scheme to turn the U.S. into a dictatorship, or to make any other desired change to the Constitution, by having Congress admit a large number of very small states, which could then ratify any constitutional amendments deemed desirable.

An anonymous writer (probably a third-year law student) has independently discovered my scheme, and has proposed it as a way to “fix” the problems that they perceive with the current political and electoral structure. The proposal has been published in the Harvard Law Review in an article that does not appear to be an April Fools’ prank.

The article points out that admission of new states has sometimes been done as a political hack. It says:

Republicans in Congress were worried about Lincoln’s reelection chances and short the votes necessary to pass the Thirteenth Amendment. So notwithstanding the traditional population requirements for statehood, they turned the territory of Nevada — population 6,857 — into a state, adding Republican votes to Congress and the Electoral College.

Specifically, the proposal is that the new states should be allocated out of territory currently in the District of Columbia (which will help ensure that they are politically aligned in the way the author prefers), and that a suitable number of new states might be one hundred and twenty-seven.

by Mark Dominus (mjd@plover.com) at January 16, 2020 05:53 PM

Ken T Takusagawa

[mkbewoae] String to PRNG

Here is a Haskell function that seeds a tf-random pseudo-random number generator with a String. Unlike previously, when we used PBKDF2, this time we use straight unsalted, unstretched SHA-256, because this is not intended for cryptographic applications (consistent with the caveat given in the documentation for tf-random). Also, this time we use cereal instead of binary to avoid lazy ByteStrings.

We do Unicode normalization because we want different representations of the same String to result in the same random number generator.  We chose normalization method NFC arbitrarily.

It's cool to glue functions from about 5 different packages together so nicely.

For pedagogical purposes, we provide many type annotations, and we also often use fully qualified package names.

Data.Serialize.decode "just works" to unpack a 256-bit (32-byte) ByteString into a (Word64, Word64, Word64, Word64) tuple, the seed type for TFGen, because there are built-in decoders for Word64 and for (a,b,c,d) tuples. Neither has metadata. It happens to be big-endian for Word64, but we do not care about endianness for this application, so long as it is consistent.

{-# LANGUAGE PackageImports #-}
import Control.Category((>>>));
import Prelude hiding((.),(>>)); --optional
import qualified System.Random.TF;
import qualified Data.Text;
import qualified Data.Text.Encoding;
import qualified Data.Text.ICU;
import qualified "cryptohash-sha256" Crypto.Hash.SHA256; -- PackageImports
import qualified Data.Serialize;
import qualified Data.ByteString as Strict;
import qualified Data.Either.Combinators;
import Data.Word(Word64);

rnginit :: String -> System.Random.TF.TFGen;
rnginit = (Data.Text.pack :: String -> Data.Text.Text)
>>> (Data.Text.ICU.normalize Data.Text.ICU.NFC :: Data.Text.Text -> Data.Text.Text)
>>> (Data.Text.Encoding.encodeUtf8 :: Data.Text.Text -> Strict.ByteString)
>>> (Crypto.Hash.SHA256.hash :: Strict.ByteString -> Strict.ByteString)
>>> (Data.Serialize.decode :: Strict.ByteString -> Either String (Word64,Word64,Word64,Word64))
>>> (Data.Either.Combinators.fromRight' :: Either String (Word64,Word64,Word64,Word64) -> (Word64,Word64,Word64,Word64))
>>> (System.Random.TF.seedTFGen :: (Word64,Word64,Word64,Word64) -> System.Random.TF.TFGen);

Here's a quick test, rolling a d6 die 10 times:

*Main> take 10 $ System.Random.randomRs (1,6) (rnginit "foo")
[5,6,1,6,5,2,2,2,3,1]

Connecting the random number generator to the random-fu package gives us access to many more probability distributions. random-fu provides entropy-source instances (MonadRandom and RandomSource) for StdGen but not for general RandomGen (though this seems to be in the works). We thought of writing an instance for TFGen, but because of the way instances work in Haskell, if anyone else in the universe (only a bit of an exaggeration) ever writes an instance for TFGen, the instances will overlap or conflict -- there's no way to mask someone else's instance.

Therefore, we instead use the GetPrim hook to locally create a wrapped Control.Monad.State.Strict monad to be a RandomSource.  Fortunately, getRandomPrimFromRandomGenState is polymorphic over RandomGen.

{-# LANGUAGE ScopedTypeVariables #-}
import Data.Function((&));
import Data.RVar(runRVar,RVar);
import Control.Monad.State.Strict(evalState,State); -- lazy state monad also works
import Data.Random.Internal.Source(GetPrim(GetPrim));
import Data.Random.Source.StdGen(getRandomPrimFromRandomGenState);
import qualified System.Random.TF as TF;

-- at least one of the GetPrim type annotations is required
rgen :: forall a . RVar a -> TF.TFGen -> a;
rgen r gen = (GetPrim getRandomPrimFromRandomGenState :: GetPrim (State TF.TFGen))
& (runRVar r :: GetPrim (State TF.TFGen) -> State TF.TFGen a)
& ((\state -> evalState state gen) :: State TF.TFGen a -> a);

stringr :: RVar a -> String -> a;
stringr r = rnginit >>> rgen r;

Here's a quick test, generating a standard normal deviate from a TFGen initialized with the string "foo":

*Main> stringr (stdNormal :: RVar Double) "foo"
-2.124934126731346

by Unknown (noreply@blogger.com) at January 16, 2020 05:05 AM

Chris Smith 2

Nice article.

Nice article. I found this when Medium suggested it as a follow-up to what I wrote today at https://medium.com/@cdsmithus/your-students-could-have-invented-the-pythagorean-theorem-438db433aec5. I’ve had success using this proof as a plausible path for guided discovery, rather than just a proof after stating the theorem.

by Chris Smith at January 16, 2020 04:11 AM

Tweag I/O

A Tale of Two Functors or: How I learned to Stop Worrying and Love Data and Control

Arnaud Spiwack

Haskell's Data and Control module hierarchies have always bugged me. They feel arbitrary. There's Data.Functor and Control.Monad—why? Monads are, after all, functors. They should belong to the same hierarchy!

I'm not that person anymore. Now, I understand that the intuition behind the Data/Control separation is rooted in a deep technical justification. But—you rightly insist—monads are still functors! So what's happening here? Well, the truth is that there are two different kinds of functors. But you could never tell them apart because they coincide in regular Haskell.

But they are different—so let's split them into two kinds: data functors and control functors. We can use linear-types to show why they are different. Let's get started.

Data functors

If you haven't read about linear types, you may want to check out Tweag's other posts on the topic. Notwithstanding, here's a quick summary: linear types introduce a new type a ⊸ b of linear functions. A linear function is a function that, roughly, uses its argument exactly once.

With that in mind, let's consider a prototypical functor: lists.

instance Functor [] where
  fmap :: (a -> b) -> [a] -> [b]
  fmap f [] = []
  fmap f (a:l) = (f a) : (fmap f l)

How could we give it a linear type?

  • Surely, it's ok to take a linear function as an argument (if fmap works on any function, it will work on functions which happen to be linear).
  • The f function is, on the other hand, not used linearly: it's used once per element of a list (of which there can be many!). So the second arrow must be a regular arrow.
  • However, we are calling f on each element of the list exactly once. So it makes sense to make the rightmost arrow linear—exactly once.

So we get the following alternative type for list's fmap:

fmap :: (a ⊸ b) -> [a] ⊸ [b]

List is a functor because it is a container of data. It is a data functor.

class Data.Functor f where
  fmap :: (a ⊸ b) -> f a ⊸ f b

Some data functors can be extended to applicatives:

class Data.Applicative f where
  pure :: a -> f a
  (<*>) :: f (a ⊸ b) ⊸ f a ⊸ f b

That means that containers of type f a can be zipped together. It also constrains the type of pure: I typically need more than one occurrence of my element to make a container that can be zipped with something else. Therefore pure can't be linear.

As an example, vectors of size 2 are data applicatives:

data V2 a = V2 a a

instance Data.Functor V2 where
  fmap f (V2 x y) = V2 (f x) (f y)

instance Data.Applicative V2 where
  pure x = V2 x x
  (V2 f g) <*> (V2 x y) = V2 (f x) (g y)

Lists would almost work, too, but there is no linear way to zip together two lists of different sizes. Note: such an instance would correspond to the Applicative instance of ZipList in base; the Applicative instance for [] in base is definitely not linear (left as an exercise to the reader).
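Since the ⊸ arrow requires GHC's LinearTypes extension, here is the V2 example transcribed into ordinary (non-linear) Haskell as a runnable sketch; the zipping behavior survives, though the linearity guarantees of course do not:

```haskell
-- V2: a vector of exactly two elements, with zip-like Applicative behavior.
data V2 a = V2 a a deriving (Eq, Show)

instance Functor V2 where
  fmap f (V2 x y) = V2 (f x) (f y)

instance Applicative V2 where
  pure x = V2 x x                       -- duplicates its argument: not linear
  V2 f g <*> V2 x y = V2 (f x) (g y)    -- zips componentwise

main :: IO ()
main = print ((+) <$> V2 1 2 <*> V2 (10 :: Int) 20)  -- V2 11 22
```

Note how pure must duplicate its argument, which is exactly why it cannot be given a linear type.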

Control functors

The story takes an interesting turn when considering monads. There is only one reasonable type for a linear monadic bind:

(>>=) :: m a ⊸ (a ⊸ m b) ⊸ m b

Any other choice of linearization and you will either get no linear values at all (if the continuation is given type a -> m b), or you can't use linear values anywhere (if the two other arrows are non-linear). In short: if you want the do-notation to work, you need monads to have this precise type.

Now, you may remember that, from (>>=) alone, it is possible to derive fmap:

fmap :: (a ⊸ b) ⊸ m a ⊸ m b
fmap f x = x >>= (\a -> return (f a))

But wait! Something happened here: all the arrows are linear! We've just discovered a new kind of functor! Rather than containing data, we see them as wrapping a result value with an effect. They are control functors.

class Control.Functor m where
  fmap :: (a ⊸ b) ⊸ m a ⊸ m b

class Control.Applicative m where
  pure :: a ⊸ m a  -- notice how pure is linear, but Data.pure wasn't
  (<*>) :: m (a ⊸ b) ⊸ m a ⊸ m b

class Control.Monad m where
  (>>=) :: m a ⊸ (a ⊸ m b) ⊸ m b

Lists are not one of these. Why? Because you cannot map over a list with a single use of the function! (Neither is Maybe because you may drop the function altogether, which is not permitted either.)

The prototypical example of a control functor is the linear State functor:

newtype State s a = State (s ⊸ (s, a))

instance Control.Functor (State s) where
  fmap f (State act) = State (\s -> on2 f (act s))
    where
      on2 :: (a ⊸ b) ⊸ (s, a) ⊸ (s, b)
      on2 g (s, a) = (s, g a)
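Transcribed into ordinary Haskell (with plain arrows standing in for ⊸), the State functor is runnable as-is; the tick action below is my own illustrative example:

```haskell
-- State: wraps a single result value with a state-threading effect.
newtype State s a = State (s -> (s, a))

runState :: State s a -> s -> (s, a)
runState (State act) = act

instance Functor (State s) where
  -- f is applied exactly once, to the single result of the action.
  fmap f (State act) = State (\s -> let (s', a) = act s in (s', f a))

-- Example action: increment the state, return ten times the old state.
tick :: State Int Int
tick = State (\s -> (s + 1, s * 10))

main :: IO ()
main = print (runState (fmap (+ 1) tick) 0)  -- (1,1)
```

Unlike the list case, fmap here uses its function argument exactly once, which is why it deserves the fully linear type.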

Conclusion

There you have it. There indeed are two kinds of functors: data and control.

  • Data functors are containers: they contain many values; some are data applicatives that let you zip containers together.
  • Control functors contain a single value and are all about effects; some are monads that the do-notation can chain.

That is all you need to know. Really.

But if you want to delve deeper, follow me to the next section because there is, actually, a solid mathematical foundation behind it all. It involves a branch of category theory called enriched category theory.

Either way, I hope you enjoyed the post and learned lots. Thanks for reading!

Appendix: The maths behind it all

Briefly, in a category, you have a collection of objects and sets of morphisms between them. The game of category theory is then to replace the sets in some part of mathematics with objects in some category. For example, one can substitute “set” in the definition of a group with topological space (yielding topological groups) or with smooth manifold (yielding Lie groups).

Enriched category theory is about playing this game on the definition of categories itself: a category enriched in \(\mathcal{C}\) has a collection of objects and objects-of-\(\mathcal{C}\) of morphisms between them.

For instance, we can consider categories enriched in abelian groups: between each pair of objects there is an abelian group of morphisms. In particular, there is at least one morphism, 0, between each pair of objects. The category of vector spaces over a given field (and, more generally, of modules over a given ring) is enriched in abelian groups. Categories enriched in abelian groups are relevant, for instance, to homology theory.

There is a theorem that all symmetric monoidal closed categories (of which the category of abelian groups is an example) are enriched in themselves. Therefore, the category of abelian groups itself is another example of a category enriched in abelian groups. Crucially for us, the category of types and linear functions is also symmetric monoidal closed. Hence is enriched in itself!

Functors can either respect this enrichment (in which case we say that they are enriched functors) or not. In the category Hask (seen as a proxy for the category of sets), this theorem is just saying that all functors are enriched because “Set-enriched functor” means the same as “regular functor”. That's why Haskell without linear types doesn't need a separate enriched functor type class.

In the category of abelian groups, the functor which maps \(A\) to \(A\otimes A\) is an example of a functor which is not enriched: the map from \(A → B\) to \(A\otimes A → B\otimes B\), which maps \(f\) to \(f\otimes f\) is not a group morphism. But the functor from \(A\) to \(A\oplus A\) is.

Control functors are the enriched functors of the category of linear functions, while data functors are the regular functors.

Here's the last bit of insight: why isn't there a Data.Monad? The mathematical notion of a monad does apply perfectly well to data functors—it just wouldn't be especially useful in Haskell. We need the monad to be strong for things like the do-notation to work correctly. But, as it happens, a strong functor is the same as an enriched functor, so data monads aren't strong. Except in Hask, of course, where data monads and control monads, being the same, are, in particular, strong.

January 16, 2020 12:00 AM

January 15, 2020

Chris Smith 2

Your students could have invented… the Pythagorean theorem

Historically, the Pythagorean theorem has been taught as a memorized formula. It has become so emblematic that Gilbert and Sullivan lampooned it as an example of arcane but impractical knowledge. Fortunately, things are improving a bit. Students in modern mathematics classes are likely to at least be shown a proof of the theorem rather than expected to memorize it.

I want to do better. Here’s a narrative I have refined and used with some success for leading 7th or 8th grade students along the whole journey of discovering and understanding the key ideas behind the Pythagorean theorem. The goal is not just to know or use the formula, but to leave them feeling that they could have come up with it on their own.

You will likely recognize this as a well-known and familiar proof. What I want to focus on here, though, is not the argument as a proof of the formula after it’s been stated, but as a plausible path of discovery.

Step 1: Framing the problem

The first step to a good journey of discovery is to have an interesting and important question. This is the first place where I think we often get off on the wrong foot. I’ve seen students bewildered by the sudden concern for right triangles and especially their hypotenuses. These terms are abstractions, and abstraction should be justified before it is used.

Most students by this point, though, have internalized the notion of the coordinate plane, and points as x and y coordinates on the plane. A question that feels much more authentic and important to them is finding the distance between two points. So I start there. Here are two points on the coordinate plane. How far apart are they?

Left to their own devices, students will often attempt to measure the line, either precisely or by just eyeballing it. That’s fine, but they should be aware that their answer is an estimate, not an exact solution. Even careful measurement with a ruler can only promise that the result is close. If asked to reason about an exact answer, most students will eventually compute the distance separately in the x and y dimensions. They will often see that the two-dimensional distance is greater than the distance in either single dimension, but less than their sum because the diagonal is a shortcut. These are great things to notice, but students will then get stuck.

Step 2: Introduce area as an intermediate step

Since the question is happening in two dimensions, it turns out to be useful to think about areas instead of distances. This is the single greatest insight in this whole exercise! All the rest is pretty straightforward reasoning, but the recasting of a question about distance into a question about area is the revolutionary concept here. So it’s worth taking slowly.

This is a good time to quickly review the relationship between the area of a square and the length of its side.

Students will already know this, of course, so it’s not necessary to spend too much time here, but you want students to think about squaring and square roots as inverse conversions between side-length and area of a square, so that if they know one, they can easily compute the other.

With this in mind, your next task is to draw a square with side length equal to the distance between those points. With enough foresight to leave room, you can do this on the same coordinate plane you started with.

If you have the time, you might pause to ask students for ways to determine the area of this square. As soon as they aren’t making progress, you can lift the veil by drawing a larger square…

Step 3: Add more structure

Here’s the picture that will get many of your students squirming to answer the question:

For most students, the strategy will now be in reach. They will calculate the area of the larger square — in this case, 17², or 289 — and then subtract the areas of the four colored triangles — in this case, 30 each, for a total of 120. The difference — in this case, 169 — is the area of the white square in the center. They should then be able to perform a square root to get the original distance that they wanted — in this case, 13.
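The arithmetic in this example can be checked with a few lines of Python. The leg lengths a = 5 and b = 12 are an assumption, chosen to be consistent with the side length 17 and the triangle areas of 30 quoted above:

```python
# Big-square computation from the example above. The leg lengths
# a = 5 and b = 12 are assumed; they match the side length 17
# and the four triangles of area 30 each described in the text.
a, b = 5, 12

big_square = (a + b) ** 2              # area of the larger square: 289
triangles = 4 * (a * b // 2)           # four triangles of area 30 each: 120
inner_square = big_square - triangles  # area of the white square: 169
distance = inner_square ** 0.5         # the original distance: 13.0
```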

This doesn’t look much like the Pythagorean theorem. That’s okay! This is an answer that your students really understand, and you can work out the next steps once their understanding is reinforced. At this point, I would ask them to work through an example together with me, and then on their own or with a partner. While most students can follow the reasoning and do the area calculation once the diagram is drawn, drawing the diagram itself for a pair of (x, y) coordinates takes them a few attempts.

Step 4: Simplify

To work toward the more conventional Pythagorean theorem, one can start with the diagram above, and shift the triangles around.

With luck (or shrewd planning), your students will have seen this picture before. It is just the visual approach to squaring a sum: (a + b)² = a² + 2ab + b². Since they only want the white portion, they can calculate it as a² + b². Now you’re done.

I love how the simplification can be done in two ways here, with algebraic expressions or by sliding around triangles in the diagram. The original strategy of computing the larger area and then subtracting off the triangles was computing (a + b)² - 2ab. Sliding the triangles corresponded to applying the distributive property, so that the 2ab terms cancel, leaving a² and b² intact.

I have seen some students who, after going through this progression, still prefer to solve distance questions for a while by drawing the first picture and calculating the answer by squaring the sum and then subtracting off the triangles. That’s okay with me for a while, because it makes sense to them. Eventually, of course, I hope they will become comfortable with the conventional form, so that it will be familiar to them for more advanced mathematics.

by Chris Smith at January 15, 2020 06:30 PM

January 14, 2020

Mark Jason Dominus

More about triple border points

[ Previously ]

A couple of readers wrote to discuss tripoints, which are places where three states or other regions share a common border point.

Doug Orleans told me about the Tri-States Monument near Port Jervis, New York. This marks the approximate location of the Pennsylvania - New Jersey - New York border. (The actual tripoint, as I mentioned, is at the bottom of the river.)

I had independently been thinking about taking a drive around the entire border of Pennsylvania, and this is just one more reason to do that. (Also, I would drive through the Delaware Water Gap, which is lovely.) Looking into this I learned about the small town of North East, so-named because it's in the northeast corner of Erie County. It's also the northernmost point in Pennsylvania.

(I got onto a tangent about whether it was the northeastmost point in Pennsylvania, and I'm really not sure. It is certainly an extreme northeast point in the sense that you can't travel north, east, or northeast from it without leaving the state. But it would be a very strange choice, since Erie County is at the very western end of the state.)

My putative circumnavigation of Pennsylvania would take me as close as possible to Pennsylvania's only international boundary, with Ontario; there are Pennsylvania - Ontario tripoints with New York and with Ohio. Unfortunately, both of them are in Lake Erie. The only really accessible Pennsylvania tripoints are the ones with West Virginia and Maryland (near Morgantown) and with Maryland and Delaware (near Newark).

These points do tend to be marked, with surveyors’ markers if nothing else. Loren Spice sent me a picture of themselves standing at the tripoint of Kansas, Missouri, and Oklahoma, not too far from Joplin, Missouri.

While looking into this, I discovered the Kentucky Bend, which is an exclave of Kentucky, embedded between Tennessee and Missouri:

 Missouri is mostly north of Tennessee, divided by the winding Mississippi River.  But the river makes a hairpin turn, flowing north to New Madrid, MO, and then turning sharply south again, leaving a narrow peninsula protruding north from Tennessee… Except that the swollen northern end of the peninsula is in Kentucky.  Its land border, to the south, is with Tennessee, and its river borders, all around, are with Missouri.

It appears that what happened here is that the border between Kentucky and Missouri is the river, with Kentucky getting the territory on the left bank, here the south side. And the border between Kentucky and Tennessee is a straight line, following roughly the 36.5 parallel, with Kentucky getting the territory north of the line. The bubble is south of the river but north of the line.

So these three states have not one tripoint, but three, all only a few miles apart!

Closeup of the three tripoints, all at about the same latitude, where the line crosses the winding Mississippi river in three places.

Finally, I must mention the Lakes of Wada, which are not real lakes, but rather are three connected subsets of the unit disc which have the property that every point on their boundaries is a tripoint.

by Mark Dominus (mjd@plover.com) at January 14, 2020 05:06 PM

January 13, 2020

Monday Morning Haskell

Using Cabal on its Own


Last week we discussed the format of the .cabal file. This file is a "package description" file, like package.json in a Node.js project. It describes important aspects of our project's different pieces. For instance, it tells us what source files we have for our library, and what packages it depends on.

Of course this file is useless without a package manager program to actually build our code! These days, most Haskell projects use Stack for package management. If you want to jump straight into using Stack, take a look at our free Stack mini-course!

But Stack is actually only a few years old. For much of Haskell's history, Cabal was the main package management tool. It's still present under the hood when using Stack, which is why Stack still uses the .cabal file. But we can also use it in a standalone fashion, as Haskell developers did for years. This week, we'll explore what this looks like. We'll encounter the problems that ultimately led to the development of Stack.

If you're still new to the Haskell language, you can also read our Liftoff Series to get a better feel for the basics!

Making a New Project

To create a new project using Cabal, you should first be in a specific directory set aside for the project. Then you can use cabal init -n to create a project. This will auto-generate a .cabal file for you with defaults in all the metadata fields.

>> mkdir MyProject && cd MyProject
>> cabal init -n
>> vim MyProject.cabal

The -n option indicates "non-interactive". You can omit that option and you'll get an interactive command prompt instead. The prompt will walk you through all the fields in the metadata section so you can supply the proper information.

>> mkdir MyProject && cd MyProject
>> cabal init
Package name? [default: MyProject]
Package version? [default: 0.1.0.0]
...

Since you only have to run cabal init once per project, it's a good idea to run through the interactive process. It will ensure you have proper metadata so you don't have to go back and fix it later.

Running this command will generate the .cabal file for you, as well as a couple other files. You'll get ChangeLog.md, a markdown file where you can record important changes. It also generates Setup.hs. You won't need to modify this simple boilerplate file. It will also generate a LICENSE file if you indicated which license you wanted in the prompt.

Basic Commands

Initializing the project will not generate any Haskell source files for you. You'll have to do that yourself. Let's suppose we start with a simple library function in a file src/Lib.hs. We would list this file under our exposed-modules field of our library section. Then we can compile the code there with the cabal build command.
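As a sketch, a minimal library section for this setup might look like the following (the module name Lib and the hs-source-dirs value are assumptions based on the file path above, not taken from a generated file):

```cabal
library
  exposed-modules:     Lib
  hs-source-dirs:      src
  build-depends:       base >=4.9 && <4.10
  default-language:    Haskell2010
```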

If we update our project to have a single executable run-project, then we can also run it with cabal run. But if our project has multiple executables, this won't work. We'll need to specify the name of the executable.

>> cabal run run-project-1
"Hello World 1!"
>> cabal run run-project-2
"Hello World 2!"

You can also run cabal configure. This will do some system checks to make sure you can actually build the program. For instance, it verifies that you have some version of ghc available. You can also use the command to change this compiler version. It also does some checks on the dependencies of your package.

Adding Dependencies

Speaking of dependencies, let's explore adding one to our project. Let's make our library depend on the split package. Using this library, we can make a lastName function like so:

import Data.List.Split (splitOn)

lastName :: String -> String
lastName inputName = last (splitOn " " inputName)

When we try to build this project, we'll see that it can't find the Data.List.Split module...yet. We need to add split as a dependency in our library. To show version constraints, let's suppose we want to use the latest version, 0.2.3.3 at the time of writing.

library
  build-depends:
      base >=4.9 && <4.10
    , split == 0.2.3.3
  ...

This still isn't quite enough! We actually need to install the split package on our machine first! The cabal install command will download the latest version of the package from Hackage and put it in a global package index on our machine.

cabal install split

Once we've done this, we'll be able to build and run our project!

Conflicting Dependencies

Now, we mentioned that our package gets installed in a "global" index on our machine. If you've worked in software long enough, this might have set off some alarm bells. And indeed, this can cause some problems! We've installed version 0.2.3.3 globally. But what if another project wants a different version? Configuring will give an error like the following:

cabal: Encountered missing dependencies:
split ==0.2.3.2

And in fact it's very tricky to have both of these versions installed in the global index!

Using a Sandbox

The way around this is to sandbox our projects. When we do this, each dependency we get from Hackage will get installed in the project-specific sandbox, rather than a global index. Within our project directory, we can create a sandbox like so:

>> cabal sandbox init

Now this project will only look at the sandbox for its dependencies. So we'll see the same messages for the missing module when we try to build. But then we'll be fine when we install packages. We can run this version of the command to install all missing packages.

>> cabal install --only-dependencies

Now our project will build without issue, even with a different version of the package!

Conflicts within a Project

Let's consider another scenario that can introduce version conflicts. Suppose we have two dependencies, package A and package B. Each of these might depend on a third package, C. But package A might depend on version X of package C. And then B might depend on version Y of C. Because we can't have two versions of the package installed, our program won't build!

Sandboxes will prevent these issues from occurring across different projects on our machine. But this scenario is still possible inside a single project! And in this case, we'll have to do a lot more manual work than we bargained for. We'll have to go through the different versions of each package, look at their dependencies, and hope we can find some combination that works. It's a messy, error-prone process.

Next week, we'll see how to solve this issue with Stack!

Conclusion

So we can see now that Cabal can stand on its own. But it has certain weaknesses. Stack's main goal is to solve these weaknesses, particularly around dependency management. Next week, we'll see how this happens!

by James Bowen at January 13, 2020 03:30 PM

January 09, 2020

Mark Jason Dominus

Three Corners

I'm a fan of geographic oddities, and a few years back when I took a road trip to circumnavigate Chesapeake Bay, I planned its official start in New Castle, DE, which is noted for being the center of the only circular state boundary in the U.S.:

Map of Delaware, showing that its northern border (with Pennsylvania) is an arc of a circle; an adjoining map of just New Castle County has the city of New Castle highlighted, showing that New Castle itself is at the center of the circle.

The red blob is New Castle. Supposedly an early treaty allotted to Delaware all points west of the river that were within twelve miles of the State House in New Castle.

I drove to New Castle, made a short visit to the State House, and then began my road trip in earnest. This is a little bit silly, because the border is completely invisible, whether you are up close or twelve miles away, and the State House is just another building, and would be exactly the same even if the border were actually a semicubic parabola with its focus at the second-tallest building in Wilmington.

Whatever, I like going places, so I went to New Castle to check it out. Perhaps it was silly, but I enjoyed going out of my way to visit a point of purely geometric significance. The continuing popularity of Four Corners as a tourist destination shows that I'm not the only one. I don't have any plans to visit Four Corners, because it's far away, kinda in the middle of nowhere, and seems like rather a tourist trap. (Not that I begrudge the Navajo Nation whatever they can get from it.)

Four Corners is famously the only point in the U.S. where four state borders coincide. But a couple of weeks ago as I was falling asleep, I had the thought that there are many triple-border points, and it might be fun to visit some. In particular, I live in southeastern Pennsylvania, so the Pennsylvania-New Jersey-Delaware triple point must be somewhere nearby. I sat up and got my phone so I could look at the map, and felt foolish:

Map of the Pennsylvania-New Jersey-Delaware triple border, about a kilometer offshore from Marcus Hook, PA, further described below.

As you can see, the triple point is in the middle of the Delaware River, as of course it must be; the entire border between Pennsylvania and New Jersey, all the hundreds of miles from its northernmost point (near Port Jervis) to its southernmost (shown above), runs right down the middle of the Delaware.

I briefly considered making a trip to get as close as possible, and photographing the point from land. That would not be too inconvenient. Nearby Marcus Hook is served by commuter rail. But Marcus Hook is not very attractive as a destination. Having been to Marcus Hook, it is hard for me to work up much enthusiasm for a return visit.

But I may look into this further. I usually like going places and being places, and I like being surprised when I get there, so visiting arbitrarily-chosen places has often worked out well for me. I see that the Pennsylvania-Delaware-Maryland triple border is near White Clay Creek State Park, outside of Newark, DE. That sounds nice, so perhaps I will stop by and take a look, and see if there really is white clay in the creek.

Who knows, I may even go back to Marcus Hook one day.

[ Addendum 20190114: More about nearby tripoints and related matters. ]

by Mark Dominus (mjd@plover.com) at January 09, 2020 04:15 PM

Tweag I/O

Introduction to Markov chain Monte Carlo (MCMC) Sampling, Part 2: Gibbs Sampling

Simeon Carstens

This is part 2 of a series of blog posts about MCMC techniques:

In the first blog post of this series, we discussed Markov chains and the most elementary MCMC method, the Metropolis-Hastings algorithm, and used it to sample from a univariate distribution. In this episode, we discuss another famous sampling algorithm: the (systematic scan) Gibbs sampler. It is very useful to sample from multivariate distributions: it reduces the complex problem of sampling from a joint distribution to sampling from the full conditional (meaning, conditioned on all other variables) distribution of each variable. That means that to sample from, say, \(p(x,y)\), it is sufficient to be able to sample from \(p(x|y)\) and \(p(y|x)\), which might be considerably easier. The problem of sampling from multivariate distributions often arises in Bayesian statistics, where inference of likely values for a parameter often entails sampling not only that parameter, but also additional parameters required by the statistical model.

Motivation

Why would splitting up sampling in this way be preferable? Well, it might turn the problem of sampling from one intractable joint distribution into sampling from several well-known, tractable distributions. If the latter (now conditional) distributions are still not tractable, at least you now can use different and well-suited samplers for each of them instead of sampling all variables with a one-size-fits-all sampler. Take, for example, a bivariate normal distribution with density \(p(x,y)\) that has very different variances for each variable:

import numpy as np

def log_gaussian(x, mu, sigma):
    # The np.sum() is for compatibility with sample_MH
    return - 0.5 * np.sum((x - mu) ** 2) / sigma ** 2 \
           - np.log(np.sqrt(2 * np.pi * sigma ** 2))


class BivariateNormal(object):
    n_variates = 2
    
    def __init__(self, mu1, mu2, sigma1, sigma2):
        self.mu1, self.mu2 = mu1, mu2
        self.sigma1, self.sigma2 = sigma1, sigma2
        
    def log_p_x(self, x):
        return log_gaussian(x, self.mu1, self.sigma1)
        
    def log_p_y(self, x):
        return log_gaussian(x, self.mu2, self.sigma2)
    
    def log_prob(self, x):
        # mean vector of the bivariate normal
        mu = np.array([self.mu1, self.mu2])
        cov_matrix = np.array([[self.sigma1 ** 2, 0],
                               [0, self.sigma2 ** 2]])
        inv_cov_matrix = np.linalg.inv(cov_matrix)
        kernel = -0.5 * (x - mu) @ inv_cov_matrix @ (x - mu).T
        normalization = np.log(np.sqrt((2 * np.pi) ** self.n_variates * np.linalg.det(cov_matrix)))

        return kernel - normalization

    
bivariate_normal = BivariateNormal(mu1=0.0, mu2=0.0, sigma1=1.0, sigma2=0.15)

The @ is a recent-ish addition to Python and denotes the matrix multiplication operator. Let's plot this density:

[Figure: plot of the bivariate normal density]

Now you can try to sample from this using the previously discussed Metropolis-Hastings algorithm with a uniform proposal distribution. Remember that in Metropolis-Hastings, a Markov chain is built by jumping a certain distance ("step size") away from the current state, and accepting or rejecting the new state according to an acceptance probability. A small step size will explore the possible values for \(x\) very slowly, while a large step size will have very poor acceptance rates for \(y\). The Gibbs sampler allows us to use separate Metropolis-Hastings samplers for \(x\) and \(y\) - each with an appropriate step size. Note that we could also choose a bivariate proposal distribution in the Metropolis-Hastings algorithm such that its variance in \(x\)-direction is larger than its variance in the \(y\)-direction, but let's stick to this example for didactic purposes.

The systematic scan Gibbs sampler

So how does Gibbs sampling work? The basic idea is that given the joint distribution \(p(x, y)\) and a state \((x_i, y_i)\) from that distribution, you obtain a new state as follows: first, you sample a new value for one variable, say, \(x_{i+1}\), from its distribution conditioned on \(y_i\), that is, from \(p(x|y_i)\). Then, you sample a new state for the second variable, \(y_{i+1}\), from its distribution conditioned on the previously drawn state for \(x\), that is, from \(p(y|x_{i+1})\). This two-step procedure can be summarized as follows: $$ \begin{align} x_{i+1} \sim& \ p(x|y_i) \\ y_{i+1} \sim& \ p(y|x_{i+1}) \end{align} $$ This is then iterated to build up the Markov chain. For more than two variables, the procedure is analogous: you pick a fixed ordering and draw one variable after the other, each conditioned on, in general, a mix of old and new values for all other variables.[1] Fixing an ordering like this is called a systematic scan; an alternative is the random scan, where we'd randomly pick a new ordering at each iteration.

Implementing this Gibbs sampler for the above example is extremely simple, because the two variables are independent (\(p(x|y)=p(x)\) and \(p(y|x)=p(y)\)). We sample each of them with a Metropolis-Hastings sampler, implemented in the first blog post as the sample_MH function. As a reminder, that function takes as arguments, in that order,

  • the old state of a Markov chain (a one-dimensional numpy array),
  • a function returning the logarithm of the probability density function (PDF) to sample from,
  • a real number representing the step size for the uniform proposal distribution, from which a new state is proposed.
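The sample_MH function itself was defined in the first blog post; as a reminder, a minimal sketch matching the interface described above could look like this (details such as the exact proposal mechanism may differ from the original implementation):

```python
import numpy as np

def sample_MH(old_state, log_prob, stepsize):
    # propose a new state from a uniform distribution centered
    # on the old state
    proposal = old_state + np.random.uniform(low=-0.5 * stepsize,
                                             high=0.5 * stepsize,
                                             size=old_state.shape)
    # Metropolis acceptance criterion for a symmetric proposal
    accept = np.log(np.random.uniform()) < log_prob(proposal) - log_prob(old_state)
    return accept, proposal if accept else old_state
```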

We then use sample_MH in the following, short function which implements the systematic scan Gibbs sampler:

def sample_gibbs(old_state, bivariate_dist, stepsizes):
    """Draws a single sample using the systematic Gibbs sampling
    transition kernel
    
    Arguments:
    - old_state: the old (two-dimensional) state of a Markov chain
                 (a list containing two floats)
    - bivariate_dist: an object representing a bivariate distribution
                      (in our case, an instance of BivariateNormal)
    - stepsizes: a list of step sizes
    
    """
    x_old, y_old = old_state
    
    # for compatibility with sample_MH, change floats to one-dimensional
    # numpy arrays of length one
    x_old = np.array([x_old])
    y_old = np.array([y_old])
    
    # draw new x conditioned on y
    p_x_y = bivariate_dist.log_p_x
    accept_x, x_new = sample_MH(x_old, p_x_y, stepsizes[0])
    
    # draw new y conditioned on x
    p_y_x = bivariate_dist.log_p_y
    accept_y, y_new = sample_MH(y_old, p_y_x, stepsizes[1])
    
    # Don't forget to turn the one-dimensional numpy arrays x_new, y_new
    # of length one back into floats
    
    return (accept_x, accept_y), (x_new[0], y_new[0])

The sample_gibbs function will yield one single sample from bivariate_normal. As we did in the previous blog post for the Metropolis-Hastings algorithm, we now write a function that repeatedly runs sample_gibbs to build up a Markov chain and call it:

def build_gibbs_chain(init, stepsizes, n_total, bivariate_dist):
    """Builds a Markov chain by performing repeated transitions using
    the systematic Gibbs sampling transition kernel
    
    Arguments:
    - init: an initial (two-dimensional) state for the Markov chain
            (a list containing two floats)
    - stepsizes: a list of step sizes of type float
    - n_total: the total length of the Markov chain
    - bivariate_dist: an object representing a bivariate distribution
                      (in our case, an instance of BivariateNormal)
    
    """
    init_x, init_y = init
    chain = [init]
    acceptances = []
    
    for _ in range(n_total):
        accept, new_state = sample_gibbs(chain[-1], bivariate_dist, stepsizes)
        chain.append(new_state)        
        acceptances.append(accept)
    
    acceptance_rates = np.mean(acceptances, 0)
    print("Acceptance rates: x: {:.3f}, y: {:.3f}".format(acceptance_rates[0],
                                                          acceptance_rates[1]))
    
    return chain 

stepsizes = (6.5, 1.0)
initial_state = [2.0, -1.0]
chain = build_gibbs_chain(initial_state, stepsizes, 100000, bivariate_normal)
chain = np.array(chain)
Acceptance rates: x: 0.462, y: 0.456

Tada! We used two very different step sizes and achieved very similar acceptance rates with both.
We now plot a 2D histogram of the samples (with the estimated probability density color-coded) and the marginal distributions:

plot_bivariate_samples(chain, burnin=200, pdf=bivariate_normal)

[Figure: 2D histogram of the samples with marginal distributions]

Looking at the path the Markov chain takes, we see several horizontal and vertical lines. These are Gibbs sampling steps in which only one of the Metropolis-Hastings moves was accepted.

A more complex example

The previous example was rather trivial in the sense that both variables were independent. Let's discuss a more interesting example, which features both a discrete and a continuous variable. We consider a mixture of two normal densities \(p_\mathcal{N}(x; \mu, \sigma)\) with relative weights \(w_1\) and \(w_2\). The PDF we want to sample from is then $$ p(x) = w_1p_\mathcal{N}(x; \mu_1, \sigma_1) + w_2p_\mathcal{N}(x; \mu_2, \sigma_2) \ \mbox . $$ This probability density is just a weighted sum of normal densities. Let's consider a concrete example, choosing the following mixture parameters:

mix_params = dict(mu1=1.0, mu2=2.0, sigma1=0.5, sigma2=0.2, w1=0.3, w2=0.7)

What does it look like? Well, it's a superposition of two normal distributions:

[Figure: the mixture density as a superposition of two normal distributions]

Inspired by this figure, we can also make the mixture nature of that density more explicit by introducing an additional integer variable \( k \in \{1,2\} \) which enumerates the mixture components. This will allow us to highlight several features and properties of the Gibbs sampler and to introduce an important term in probability theory along the way. Having introduced a second variable means that we can consider several probability distributions:

  • \(p(x,k)\): the joint distribution of \(x\) and \(k\) tells us how probable it is to find a value for \(x\) and a value for \(k\) "at the same time" and is given by $$ p(x,k) = w_k p_\mathcal{N}(x; \mu_k, \sigma_k) $$
  • \(p(x|k)\): the conditional distribution of \(x\) given \(k\) tells us the probability of \(x\) for a certain \(k\). For example, if \(k=1\), what is \(p(x|k)\)? Setting \(k=1\) means we're considering only the first mixture component, which is a normal distribution with mean \(\mu_1\) and standard deviation \(\sigma_1\) and thus \(p(x|k=1)=p_\mathcal{N}(x; \mu_1, \sigma_1)\). In general we then have $$ p(x|k) = p_\mathcal{N}(x; \mu_k, \sigma_k) \ \mbox . $$
  • \(p(k|x)\): assuming a certain value \(x\), this probability distribution tells us for each \(k\) the probability with which you would draw \(x\) from the mixture component with index \(k\). This probability is non-trivial, as the mixture components overlap and each \(x\) thus has a non-zero probability in each component. But Bayes' theorem saves us and yields $$ p(k|x) = \frac{p(x|k) p(k)}{p(x)} \ \mbox . $$
  • \(p(k)\): this is the probability of choosing a mixture component \(k\) irrespective of \(x\) and is given by the mixture weights \(w_k\).

The probability distributions \(p(x)\) and \(p(k)\) are related to the joint distribution \(p(x,k)\) by a procedure called marginalization. We marginalize \(p(x,k)\) over, say, \(k\), when we are only interested in the probability of \(x\), independent of a specific value for \(k\). That means that the probability of \(x\) is the sum of the probability of \(x\) when \(k=1\) plus the probability of \(x\) when \(k=2\), or, formally, $$ p(x)=\sum_{ k \in \{1, 2\} } p(x,k) \ \mbox . $$
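As a small numerical check, we can verify that summing the joint distribution over \(k\) reproduces the mixture density, and that the conditional probabilities \(p(k|x)\) obtained via Bayes' theorem sum to one. This standalone sketch recomputes the densities with plain numpy, using the mixture parameters given above (components are indexed 0 and 1 here):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# mixture parameters from mix_params above
w = (0.3, 0.7)
mu = (1.0, 2.0)
sigma = (0.5, 0.2)

def p_joint(x, k):
    # p(x, k) = w_k * N(x; mu_k, sigma_k)
    return w[k] * gaussian_pdf(x, mu[k], sigma[k])

def p_marginal(x):
    # marginalization: p(x) = sum over k of p(x, k)
    return sum(p_joint(x, k) for k in (0, 1))

# Bayes' theorem: p(k|x) = p(x, k) / p(x); these must sum to one
x = 1.5
posteriors = [p_joint(x, k) / p_marginal(x) for k in (0, 1)]
```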

With these probability distributions, we have all the required ingredients for setting up a Gibbs sampler. We can then sample from \(p(x,k)\) and reconstruct \(p(x)\) by marginalization. As marginalization means "not looking at a variable", obtaining samples from \(p(x)\) given samples from \(p(x,k)\) just amounts to discarding the sampled values for \(k\).

Let's first implement a Gaussian mixture with these conditional distributions:

class GaussianMixture(object):
    
    def __init__(self, mu1, mu2, sigma1, sigma2, w1, w2):
        self.mu1, self.mu2 = mu1, mu2
        self.sigma1, self.sigma2 = sigma1, sigma2
        self.w1, self.w2 = w1, w2
        
    def log_prob(self, x):
        return np.logaddexp(np.log(self.w1) + log_gaussian(x, self.mu1, self.sigma1),
                            np.log(self.w2) + log_gaussian(x, self.mu2, self.sigma2))
    
    def log_p_x_k(self, x, k):
        # logarithm of p(x|k)
        mu = (self.mu1, self.mu2)[k]
        sigma = (self.sigma1, self.sigma2)[k]
    
        return log_gaussian(x, mu, sigma)
    
    def p_k_x(self, k, x):
        # p(k|x) using Bayes' theorem
        mu = (self.mu1, self.mu2)[k]
        sigma = (self.sigma1, self.sigma2)[k]
        weight = (self.w1, self.w2)[k]
        log_normalization = self.log_prob(x)

        return np.exp(log_gaussian(x, mu, sigma) + np.log(weight) - log_normalization)

The interesting point here (and, in fact, the reason I chose this example) is that \(p(x|k)\) is a probability density for a continuous variable \(x\), while \(p(k|x)\) is a probability distribution for a discrete variable. This means we will have to choose two very different sampling methods. While we could just use a built-in numpy function to draw from the normal distributions \(p(x|k)\), we will use Metropolis-Hastings. The freedom to do this really demonstrates the flexibility we have in choosing samplers for the conditional distributions.

So we need to reimplement sample_gibbs and build_gibbs_chain. Their arguments are very similar to those of the previous implementation, with a slight difference: the states now consist of a float for the continuous variable and an integer for the mixture component, and instead of a list of step sizes we need only a single step size, as only one variable is sampled with Metropolis-Hastings.

def sample_gibbs(old_state, mixture, stepsize):
    """Draws a single sample using the systematic Gibbs sampling
    transition kernel
    
    Arguments:
    - old_state: the old (two-dimensional) state of a Markov chain
                 (a list containing a float and an integer representing 
                 the initial mixture component)
    - mixture: an object representing a mixture of densities
               (in our case, an instance of GaussianMixture)
    - stepsize: a step size of type float 
    
    """
    x_old, k_old = old_state
    
    # for compatibility with sample_MH, change floats to one-dimensional
    # numpy arrays of length one
    x_old = np.array([x_old])
    
    # draw new x conditioned on k
    x_pdf = lambda x: mixture.log_p_x_k(x, k_old)
    accept, x_new = sample_MH(x_old, x_pdf, stepsize)
    
    # ... turn the one-dimensional numpy arrays of length one back
    # into floats
    x_new = x_new[0]
    
    # draw new k conditioned on x 
    k_probabilities = (mixture.p_k_x(0, x_new), mixture.p_k_x(1, x_new))
    jump_probability = k_probabilities[1 - k_old]
    k_new = np.random.choice((0,1), p=k_probabilities)
    
    return accept, jump_probability, (x_new, k_new)


def build_gibbs_chain(init, stepsize, n_total, mixture):
    """Builds a Markov chain by performing repeated transitions using
    the systematic Gibbs sampling transition kernel
    
    Arguments:
    - init: an initial (two-dimensional) state of a Markov chain
            (a list containing a one-dimensional numpy array
            of length one and an integer representing the initial
            mixture component)
    - stepsize: a step size of type float
    - n_total: the total length of the Markov chain
    - mixture: an object representing a mixture of densities
               (in our case, an instance of GaussianMixture)
    
    """
    chain = [init]
    acceptances = []
    jump_probabilities = []
    
    for _ in range(n_total):
        accept, jump_probability, new_state = sample_gibbs(chain[-1], mixture, stepsize)
        chain.append(new_state)
        jump_probabilities.append(jump_probability)
        acceptances.append(accept)
    
    acceptance_rate = np.mean(acceptances)
    print("Acceptance rate: x: {:.3f}".format(acceptance_rate))
    print("Average probability to change mode: {}".format(np.mean(jump_probabilities)))
    
    return chain

mixture = GaussianMixture(**mix_params)
stepsize = 1.0
initial_state = [2.0, 1]
chain = build_gibbs_chain(initial_state, stepsize, 10000, mixture)
burnin = 1000
x_states = [state[0] for state in chain[burnin:]]
Acceptance rate: x: 0.631
Average probability to change mode: 0.08629295966662387

Plotting a histogram of our samples shows that the Gibbs sampler correctly reproduces the desired Gaussian mixture:

[figure: histogram of the Gibbs samples overlaid on the true mixture density]

You might wonder why we're also printing the average probability for the chain to jump to the mixture component it is currently not in. If this probability is very low, the Markov chain gets stuck in the current mode for a long time and thus has difficulty exploring the distribution rapidly. The quantity of interest here is \(p(k|x)\): the probability of component \(k\) given a value \(x\). It can be very low if the components are well separated and \(x\) is much more likely under the component other than \(k\). Let's explore this behavior by increasing the separation between the means of the mixture components:

mixture = GaussianMixture(mu1=-1.0, mu2=2.0, sigma1=0.5, sigma2=0.2, w1=0.3, w2=0.7)
stepsize = 1.0
initial_state = [2.0, 1]
chain = build_gibbs_chain(initial_state, stepsize, 100000, mixture)
burnin = 10000
x_states = [state[0] for state in chain[burnin:]]
Acceptance rate: x: 0.558
Average probability to change mode: 6.139534006013391e-06
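To see just how small \(p(k|x)\) becomes with this separation, we can compute it directly. Here is a standalone sketch that re-derives the conditional from the mixture parameters above (the actual GaussianMixture.p_k_x method is defined earlier in the notebook):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_k_x(k, x, mus, sigmas, weights):
    """Posterior probability of mixture component k given a value x."""
    densities = [w * normal_pdf(x, mu, sigma)
                 for w, mu, sigma in zip(weights, mus, sigmas)]
    return densities[k] / sum(densities)

# parameters of the well-separated mixture from the example above
mus, sigmas, weights = (-1.0, 2.0), (0.5, 0.2), (0.3, 0.7)

# at x = 2.0, deep inside component 1, the probability of
# jumping to component 0 is essentially zero (on the order of 1e-9)
print(p_k_x(0, 2.0, mus, sigmas, weights))
```

This matches the vanishingly small "average probability to change mode" printed by the chain.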

Let's plot the samples and the true distribution and see how the Gibbs sampler performs in this case:

[figure: histogram of the Gibbs samples against the true, well-separated mixture density]

You should see the probability decrease significantly and perhaps one of the modes being strongly over- and the other undersampled. The lesson here is that the Gibbs sampler might produce highly correlated samples. Again—in the limit of many, many samples—the Gibbs sampler will reproduce the correct distribution, but you might have to wait a long time.

Conclusions

By now, I hope you have a basic understanding of why Gibbs sampling is an important MCMC technique, how it works, and why it can produce highly correlated samples. I encourage you again to download the full notebook and play around with the code: you could try using the normal function from the numpy.random module instead of Metropolis-Hastings in both examples or implement a random scan, in which the order in which you sample from the conditional distributions is chosen randomly.

Or you could read about and implement the collapsed Gibbs sampler, which allows you to perfectly sample the Gaussian mixture example by sampling from \(p(k)\) instead of \(p(k|x)\). Or you can just wait a little more for the next post in the series, which will be about Hybrid Monte Carlo (HMC), a fancy Metropolis-Hastings variant which takes into account the derivative of the log-probability of interest to propose better, less correlated, states!


  1. It's important to note, though, that the transition kernel given by the above procedure does not define a detailed-balanced transition kernel for a Markov chain on the joint space of \(x\) and \(y\). One can show, however, that for each single variable this procedure is a detailed-balanced transition kernel, and the Gibbs sampler thus constitutes a composition of Metropolis-Hastings steps with acceptance probability 1. For details, see, for example, this stats.stackexchange.com answer. ↩︎

January 09, 2020 12:00 AM

January 06, 2020

Monday Morning Haskell

Organizing Our Package!


To start off the new year, we're going to look at the process of creating and managing a new Haskell project. After learning the very basics of Haskell, this is one of the first problems you'll face when starting out. Over the next few weeks, we'll look at some programs we can use to manage our packages, such as Cabal, Stack, and Nix.

The first two of these options both use the .cabal file format for the project specification file. So to start this series off, we're going to spend this week going through this format. We'll also discuss, at a high level, what a package consists of and how this format helps us specify it.

If you'd like to skip ahead a little and get to the business of writing a Haskell project, you should take our free Stack mini-course! It will teach you all the most basic commands you'll need to get your first Haskell project up and running. If you're completely new to Haskell, you should first get familiar with the basics of the language! Head to our Liftoff Series for help with that!

What is a Package Anyway?

The .cabal file describes a single Haskell package. So before we understand this file or its format, we should understand what a package is. Generally speaking, a package has one of two goals.

  1. Provide library code that others can use within their own Haskell projects
  2. Create an executable program that others can run on their computers

In the first case, we'll usually want to publish our package on a repository (such as Hackage or Github) so that others can download it and use it. In the second case, we can allow others to download our code and compile it from source code. But we can also create the binary ourselves for certain platforms, and then publish it.

Our package might also include testing code that is internal to the project. This code allows us (or anyone using the source code) to verify the behavior of the code.

So our package contains some combination of these three different elements. The main job of the .cabal file is to describe these elements: the source code files they use, their dependencies, and how they get compiled.

Any package has a single .cabal file, which should bear the name of the package (e.g. MyPackage.cabal). And a .cabal file should only correspond to a single package. That said, it is possible for a single project on your machine to contain many packages. Each sub-package would have its own .cabal file. Armed with this knowledge, let's start exploring the different areas of the .cabal file.

Metadata

The most basic part of the .cabal file is the metadata section at the top. This contains useful information about the package. For starters, it should have the project's name, as well as the version number, author name, and a maintainer email.

name: MyProject
version: 0.1.0.0
author: John Smith
maintainer: john@smith.com

It can also specify information like a license file and copyright owner. Then there are a couple other fields that tell the Cabal package manager how to build the package. These are the cabal-version and the build-type (usually Simple).

license-file: LICENSE
copyright: Monday Morning Haskell 2020
build-type: Simple
cabal-version: >=1.10

The Library

Now the rest of the .cabal file will describe the different code elements of our project. The formats of these sections are all similar, with a few tweaks. The library section describes the library code for our project. That is, the code people would have access to when using our package as a dependency. It has a few important fields.

  1. The "exposed" modules tell us the public API for our library. Anyone using our library as a dependency can import these modules.
  2. "Other" modules are internal source files that other users shouldn't need. We can omit this if there are none.
  3. We also provide a list of "source" directories for our library code. Any module living under one of these directories is namespaced by its file path (e.g. src/Internal/JsonParser.hs provides the module Internal.JsonParser).
  4. We also need to specify dependencies. These are other packages, generally ones that live in Hackage, that our project depends on. We can provide various kinds of version constraints on these.
  5. Finally, there is a "default language" section. This indicates either Haskell2010 or Haskell98. The latter has fewer language features and extensions, so for newer projects you should almost always use Haskell2010.

Here's a sample library section:

library
  exposed-modules:
      Runner
      Router
      Schema
  other-modules:
      Internal.OptionsParser
      Internal.JsonParser
  hs-source-dirs:
      src
  build-depends:
      base >=4.9 && <4.10
  default-language: Haskell2010

A couple other optional fields are ghc-options and default-extensions. The first specifies command line options for GHC that we want to include whenever we build our code. The second specifies common language extensions to use by default. This way, we don't always need {-# LANGUAGE #-} pragmas at the tops of all our files.

library
  ...
  ghc-options:
    -Wall
    -Werror
  default-extensions:
    OverloadedStrings
    FlexibleContexts

The library is the most important section of the file, and probably the one you'll update the most. You'll need to update the build-depends section each time you add a new dependency. And you'll update one of the modules sections every time you make a new file. There are several other fields you can use for various circumstances, but these should be enough to get you started.

Executables

Now we can also provide different kinds of executable code with our package. This allows us to produce a binary program that others can run. We specify such a binary in an "executable" section. Executables have similar fields, such as the source directory, language, and dependency list.

Instead of listing exposed modules and other modules, we specify one file as the main-is for running the program. This file should contain a main expression of type IO () that gets run when executing the binary.

The executable can (and generally should) import our library as a dependency. We use the name of our project in the build-depends section to indicate this.

executable run-my-project
  main-is: RunProject.hs
  hs-source-dirs: app
  build-depends:
      base >=4.9 && <4.10
    , MyProject
  default-language: Haskell2010

As we explore different package managers, we'll see how we can build and run these executables.

Test Suites

There are a couple special types of executables we can make as well. Test suites and benchmarks allow us to test out our code in different ways. Their format is almost identical to executables. You would use the word test-suite or benchmark instead of executable. Then you must also provide a type, describing how the program should exit if the test fails. In most cases, you'll want to use exitcode-stdio-1.0.

test-suite test-my-project
  type: exitcode-stdio-1.0
  main-is: Test.hs
  hs-source-dirs: test
  build-depends:
      base >=4.9 && <4.10
    , HUnit
    , MyProject
  default-language: Haskell2010

As we mentioned, a test suite is essentially a special type of executable. Thus, it will have a "main" module with a main :: IO () expression. Any testing library will have some kind of special main function that allows us to create test cases. Then we'll have some kind of special test command as part of our package manager that will run our test suites.
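Putting the sections together, a minimal complete .cabal file (assembled from the illustrative snippets above) might look like this:

```cabal
name: MyProject
version: 0.1.0.0
author: John Smith
maintainer: john@smith.com
build-type: Simple
cabal-version: >=1.10

library
  exposed-modules:
      Runner
  hs-source-dirs:
      src
  build-depends:
      base >=4.9 && <4.10
  default-language: Haskell2010

executable run-my-project
  main-is: RunProject.hs
  hs-source-dirs: app
  build-depends:
      base >=4.9 && <4.10
    , MyProject
  default-language: Haskell2010
```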

Conclusion

That wraps up the main sections of our .cabal file. Knowing the different pieces of our project is the first step towards running our code. Next week, we'll start exploring the different tools that will make use of this project description. We'll start with Cabal, and then move onto Stack.

by James Bowen at January 06, 2020 03:30 PM

January 05, 2020

Gabriel Gonzalez

Dhall - Year in review (2019-2020)


The Dhall configuration language is now three years old and this post reviews progress in 2019 and the future direction of the language in 2020.

If you’re not familiar with Dhall, you might want to visit the official website for the language. This post assumes familiarity and interest in the language.

I would like to use this post to advertise a short survey you can take if you would like to provide feedback that informs the direction of the language:

Language bindings

Last year’s survey indicated that many respondents were keen on additional language bindings, which this section covers.

This year there is a new officially supported language binding!

  • dhall-rust - Rust bindings to Dhall by Nadrieril

    I’m excited about this binding, both because Rust is an awesome language and also because I believe this paves the way for C/C++ bindings (and transitively any language that can interop with C)

There is also one new language binding close to completion:

  • dhall-golang - Go bindings to Dhall by Philip Potter

    This binding is not yet official, but I’m mentioning it here in case interested parties want to contribute. If you are interested in contributing then this thread is a good starting point.

    This is a binding that I believe would improve the user experience for one of Dhall’s largest audiences (Ops / CI / CD), since many tools in this domain (such as Kubernetes) are written in Go.

If there is a language binding that you would most like to see the survey includes a question to let you advertise your wish list for language bindings.

As I mentioned last year, I have no plans to implement any new language bindings myself. However, there are always things I can do to improve the likelihood of new language bindings popping up:

  • Error messages

    I’ve noticed that a major barrier for a new implementation is adding quality error messages.

    One solution to this problem may be taking the error messages for the Haskell implementation and upstreaming them into shared templates that all implementations can reuse (and improve upon)

  • Reference implementation

    One idea I’ve floated a few times recently is having a simplified reference implementation in some programming language instead of using natural deduction as the notation for specifying language semantics. This might help ease the life of people who are not as familiar with programming language theory and its notation.

    For example, the current Haskell implementation is not suitable as a reference implementation because it operates under a lot of constraints that are not relevant to the standard (such as customization, formatting, and performance).

  • Simplify the standard

    This year we removed several stale features (such as old-style Optional literals and old-style union literals) in the interest of decreasing the cost for language binding maintainers.

    There is also one feature that in my eyes is “on the chopping block”, which is the language’s using keyword for custom headers. This feature is one of the more complex ones to implement correctly and doesn’t appear to be carrying its own weight. There also may be preferable alternatives to this feature that don’t require language support (such as .netrc files).

Integrations

There are other integrations that are not language bindings, which this section covers.

PureScript package sets

Dhall is now the officially supported way of specifying PureScript package sets:

… and there is a new PureScript build tool named spago that provides the command-line interface to using these package sets:

There are several contributors to both of these repositories, so I can’t acknowledge them all, but I would like to give special mention to Justin Woo and Fabrizio Ferrai for bootstrapping these projects.

This integration is the largest case I’m aware of where Dhall is not being used for its own sake but rather as a required configuration format for another tool.

YAML/JSON

Last year we added support for converting Dhall to JSON/YAML and this year antislava and Robbie McMichael also added support for converting JSON/YAML to Dhall. Specifically, there are two new json-to-dhall and yaml-to-dhall executables that you can use.

This addressed a common point of feedback from users that migrating existing YAML configuration files to Dhall was tedious and error-prone. Now the process can be automated.

This year we also added Prelude support for JSON and YAML. Specifically:

  • There is a new Prelude.JSON.Type that can model arbitrary schema-free JSON or YAML

  • There is a new Prelude.JSON.render utility that can render expressions of the above type as JSON or YAML Text that is guaranteed to be well-formed

Here is an example of how it works:
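The original inline example was not preserved in this copy; here is a sketch of the idea (assuming the Prelude's JSON package at its usual URL, with illustrative field values):

```dhall
let JSON = https://prelude.dhall-lang.org/JSON/package.dhall

in  JSON.render
      ( JSON.object
          ( toMap
              { name = JSON.string "dhall"
              , stable = JSON.bool True
              }
          )
      )
```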

In other words, there is now a “pure Dhall” implementation of JSON/YAML support, although it is not as featureful as the dhall-to-{json,yaml} executables.

Special thanks to Philipp Krüger for contributing Prelude.JSON.renderYAML!

On top of that, {yaml,json}-to-dhall and dhall-to-{yaml,json} both natively support the schema-free JSON type from the Prelude, which means that you can now incrementally migrate YAML/JSON configuration files. You can learn more about this from the following chapter in the Dhall Configuration Language Manual:

XML

Thanks to Stephen Weber Dhall now supports XML:

The above package provides dhall-to-xml and xml-to-dhall utilities for converting between Dhall and XML. This package also provides a Ruby API to this functionality as well.

This fills one of the big omissions in supported configuration formats that we had last year, so I’m very thankful for this contribution.

Rails

The same Stephen Weber also contributed Rails support for Dhall:

… so that you can use Dhall as the configuration file format for a Rails app instead of YAML.

C package management

Vanessa McHale built a C package manager named cpkg with an emphasis on cross compilation:

The current package set already supports a surprisingly large number of C packages!

I also find this project fascinating because I’ve seen a few people discuss what Nixpkgs (the Nix package repository) might look like if it were redone from the ground up in terms of Dhall. cpkg most closely resembles how I imagined it would be organized.

Language improvements

Last year some survey respondents were interested more in improvements to the language ergonomics rather than specific integrations, so this section covers new language enhancements.

Consuming packages

One thing we improved this year was the experience for people consuming Dhall packages.

Probably the biggest improvement was changing to “stable hashes”, where we stopped using the standard version as an input to semantic hashes. Users complained about each new version of the standard breaking their integrity checks, and now that is a thing of the past. This means that expressions authored for older versions of the language are now far more likely to work for newer language versions when protected by an integrity check.

Oliver Charles also contributed another large improvement by standardizing support for mixed records of types and terms. This means that package authors can now serve both types and terms from the same top-level package.dhall file instead of having to author separate types.dhall and terms.dhall files.

For example, the Prelude now serves both terms and types from a single package:

The above example also illustrates how field names no longer need to be escaped if they conflict with reserved names (like Type). This improves the ergonomics of using the Prelude which had several field names that conflicted with built-in language types and previously had to be escaped with backticks.

Authoring packages

We also improved the experience for users authoring new packages. Dhall now has language support for tests so that package authors do not need to implement testing infrastructure out of band.

You can find several examples of this in the Prelude, such as the tests for the Prelude.Natural.greaterThan utility:
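The tests themselves were not preserved in this copy; a sketch of what they look like (exact inputs illustrative):

```dhall
let greaterThan = https://prelude.dhall-lang.org/Natural/greaterThan

let example0 = assert : greaterThan 5 4 === True

let example1 = assert : greaterThan 4 5 === False

-- a property test: the assertion is checked symbolically for an
-- arbitrary n, since greaterThan n n normalizes to False
let property0 = λ(n : Natural) → assert : greaterThan n n === False

in  greaterThan
```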

The above example shows how you can not only write unit tests but in limited cases you can also write property tests. For example, the above property0 test verifies that greaterThan n n is False for all possible values of n.

Dependent types

Language support for tests is a subset of a larger change: dependent types. Dhall is now technically a dependently-typed language, meaning that you can take advantage of some basic features of dependent types, such as:

  • Type-level assertions (i.e. the tests we just covered)
  • Type-level literals (such as Natural and Text)

… but you cannot do more sophisticated things like length-indexed Lists.

toMap keyword

This year Mario Blažević added a new toMap keyword for converting Dhall records to homogeneous lists of key-value pairs (a.k.a. Maps):
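A sketch of how toMap works (record contents illustrative):

```dhall
-- toMap converts a record whose fields all share one type into a
-- List of key-value pairs (fields come out sorted by key):
let example = toMap { foo = 1, bar = 2 }

-- … which evaluates to
--   [ { mapKey = "bar", mapValue = 2 }, { mapKey = "foo", mapValue = 1 } ]

in  example : List { mapKey : Text, mapValue : Natural }
```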

Dhall users frequently requested this feature for supporting JSON/YAML-based formats. These formats commonly use dictionaries with a variable set of fields, but this led to an impedance mismatch when interoperating with a typed language like Dhall because Dhall records are not homogeneous maps and the type of a Dhall record changes when you add or remove fields.

Normally the idiomatic way to model a homogeneous Map in Dhall would be a List of key-value pairs (since you can add or remove key-value pairs without changing the type of a List), but that’s less ergonomic than using a record. The toMap keyword gives users the best of both worlds: they can use Dhall’s record notation to ergonomically author values that they can convert to homogeneous Maps using toMap.

The :: operator for record completion

Several users complained about the language’s support for records with defaultable fields, so we added a new operator to make this more ergonomic.

This example illustrates how the operator works:
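The original example is not preserved in this copy; here is a sketch of how such a schema record and the operator are used (field names illustrative):

```dhall
let Person =
      { Type = { name : Text, age : Natural }
      , default = { age = 0 }
      }

-- every field not specified falls back to the schema's default
let john = Person::{ name = "John" }

in  john
-- evaluates to { name = "John", age = 0 }
```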

In other words, given a “schema” record (such as Person) containing a record type and a record of default values, you can use that schema to instantiate a record, defaulting all fields that are not specified.

The operator is “syntactic sugar”: an expression like Person::r desugars to (Person.default // r) : Person.Type.

Also, dhall format will recognize this operator and format the operator compactly for large nested records authored using this operator.

The easiest way to motivate this change is to compare the dhall-kubernetes simple deployment example before and after using this operator. Before, using the // operator and old formatting rules, the example looked like this:

Now the most recent iteration of the example looks like this:

New built-ins

One change with a high power-to-weight ratio was adding new built-ins. Without listing all of them, the key changes were:

  • Integers are no longer opaque. You can convert back and forth between Integer and Natural and therefore implement arbitrary arithmetic on Integers.

  • Some new built-ins enabled new efficient Prelude utilities that would have been prohibitively slow otherwise

  • Some things that used to require external command-line tools can now be implemented entirely within the language (such as modeling and rendering JSON/YAML in “pure Dhall” as mentioned above)

Note that Text is still opaque, although I predict that is the most likely thing that will change over the next year if we continue to add new built-ins.

Enums

Enums are now much more ergonomic in Dhall, as the following example illustrates:
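The elided example might have looked like the following sketch (type and values illustrative):

```dhall
let DNA = < A | C | G | T >

let fragment = [ DNA.A, DNA.T, DNA.G ]

-- Before this change, each alternative needed an explicit {} type,
--   < A : {} | C : {} | G : {} | T : {} >
-- and constructing a value required applying the constructor: DNA.A {=}

in  fragment
```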

More generally, union alternatives can now be empty, like this:

… and enums are the special case where all alternatives are empty.

Before this change users would have to use an alternative type of {}, like this:

… which made things more verbose both for authors and consumers of Dhall packages.

Tooling improvements

This section covers improvements to the tooling in order to provide a more complete development experience.

Language server

Last year I stated that one of our goals was to create a Dhall language server for broader editor support, and I’m happy to announce that we accomplished that goal!

Credit goes to both PanAeon (who authored the initial implementation) and Folkmar Ramcke (who greatly expanded upon the initial implementation as part of a Google Summer of Code project).

You can read the final report at the end of Folkmar’s work here:

… and this GIF gives a sample of what the language server can do:

The language server was tested to work with VSCode but in principle should work with any editor that supports the language server protocol with a small amount of work. I’ve personally tested that the language server works fine with Vim/Neovim.

If you have any issues getting the language server working with your editor of choice just let us know as we plan to polish and document the setup process for a wide variety of editors.

Also, we continue to add new features to language server based on user feedback. If you have cool ideas for how to make the editor experience more amazing please share them with us!

Pre-built executables for all platforms

Several users contributed continuous delivery support so that we could automatically generate pre-built executables for the shared command-line tools, including the dhall command and dhall-lsp-server (the language server).

That means that for each new release you can download pre-built executables for:

  • Windows
  • OS X
  • Linux

… from this page:

Docker support

You can also obtain docker containers for many of the command-line tools for ease of integration with your company’s container-based infrastructure:

Performance

The Haskell implementation (which powers the dhall tool and the language server) has undergone some dramatic performance improvements over the last year.

Most of these performance improvements were in response to the following two pressures on the language:

  • The language server requires a snappy feedback loop for productive editing

  • People are commonly using Dhall on very large program configurations (like dhall-kubernetes)

There is still room for improvement, but it is markedly better for all Dhall configuration files and orders of magnitude faster in many cases compared to a year ago.

Formatting improvements

The standard formatter is probably one of the things I get the most feedback about (mostly criticism 🙂), so I’ve spent some time this year on improving the formatter in the following ways:

  • let-related comments are now preserved

    … and I plan to expand support for preserving more comments

  • Expressions are now much more compact than before

    … such as in the dhall-kubernetes sample code above

Additionally, I recently started a discussion about potentially switching to ASCII as the default for formatting code, which you can follow here:

The outcome of that discussion was to add several new survey questions to assess whether people prefer to read and write Dhall code using ASCII or Unicode. So if you have strong opinions about this then please take the survey!

Dhall packages

Another significant component of the Dhall ecosystem is packages written within the language.

Dhall differentiates itself from other programmable file formats (e.g. Jsonnet) by having hundreds of open source packages built around the language that provide support for a variety of tools and formats (especially in the Ops / CI / CD domain).

In fact, Dhall’s open source footprint grew large enough this year that GitHub now recognizes Dhall as a supported file format. This means that files with a .dhall extension now enjoy syntax highlighting on GitHub and show up in language statistics for projects.

Here are some example Dhall bindings to various formats added last year:

I’m highly grateful for every person who improves the ecosystem. In fact, I randomly stalk Dhall packages on GitHub to inform language design by seeing how people use Dhall “in the wild”.

Shared infrastructure

We made two main improvements to shared infrastructure for the Dhall community this year:

Documentation

The Dhall wiki has been moved to docs.dhall-lang.org thanks to work by Tristan de Cacqueray. This means that:

  • The documentation is now generated using Sphinx

  • The documentation is now much easier to contribute to as it is under version control here:

    dhall-lang/docs

So if you would like to improve the documentation you can now open a pull request to do so!

Discourse

We also have a new Discourse forum hosted at discourse.dhall-lang.org that you can use to discuss anything Dhall-related.

We’ve been using the forum so far for announcing projects / releases and also as a sounding board for ideas.

Funding

Last year I solicited ideas for funding improvements to the Dhall ecosystem and this year we followed through on three different funding mechanisms:

Google Summer of Code

The most successful funding source by far was Google’s Summer of Code grant that funded Folkmar Ramcke to develop the language server. I plan to try this again for the upcoming summer and I will also recommend that other Dhall projects and language bindings try this out, too. Besides providing a generous source of funding (thank you, Google 🙇‍♂️) this program is an excellent opportunity to bring in new contributors to the ecosystem.

Open Collective

Another thing we set up this year is an Open Collective for Dhall so that we can accept donations from companies and individuals. We started this only a few months ago and thanks to people’s generosity we’ve accumulated over $500 in donations.

I would like to give special thanks to our first backer:

… and our largest backer:

We plan to use these donations to fund projects that (A) benefit the entire Dhall community and (B) bring in new contributors, so your donations help promote a vibrant and growing developer community around the language.

Our first such project was to implement “pure Dhall� support for rendering YAML:

… and we have another proposal in progress to fund documenting the setup process for the language server for various editors:

If you would like to donate, you can do so here:

Book

I’m also working on the following book:

My plan is to make the book freely available using LeanPub but to give users the option to pay for the book, with all proceeds going to the above Open Collective for Dhall.

One of the strongest pieces of survey feedback we got was that users were willing to pay for Dhall-related merchandise (especially books) and they were highly eager for documentation regarding best practices for the language. This book intends to address both of those points of feedback.

I expect that at the current rate of progress the book will likely be done by the end of this year, but you can already begin reading the chapters that I’ve completed so far:

Future directions

Marketing

One of the things that’s slowly changing about the language is how we market ourselves. People following the language know that we’ve recently revamped the website:

… and changed our “slogan” to “Maintainable configuration files”.

One difference from last year is that we’re no longer trying to replace all uses for YAML. User feedback indicated that some uses of YAML were better served by TOML rather than Dhall. Specifically, small (~10 line) configuration files for simple command-line tools were cases where TOML was a better default choice than Dhall.

On the other hand, we find that users who deal with very large and fragmented program configurations tend to prefer Dhall due to the language’s support for features that promote maintainability and reduce total cost of ownership.

I continue to prioritize Ops / CI / CD use cases, but I no longer try to displace YAML for use cases where TOML would be a more appropriate choice.

Completing the Dhall manual

One of my personal goals is to complete the Dhall manual to help people confidently recommend Dhall to others by providing high-quality material to help onboard their coworkers. I expect this will help accelerate language adoption quite a bit.

Polish the language server

The language server is another area of development that I see as highly promising. Although we currently provide common features like type-on-hover and intelligent auto-completion, I still think there is a lot of untapped potential here to really “wow” new users by showcasing Dhall’s strengths.

People currently have really low expectations for programmable file formats, so I view the quality of the language server implementation as being a way that Dhall can rapidly differentiate itself from competing programmable file formats. In particular, Dhall is one of the few typed configuration formats and quality editor support is one of the easiest ways to convey the importance of using a typed language.

Packaging for various distributions

One thing that the Dhall ecosystem would benefit from is packaging, not just for Linux distributions but other platforms as well. We made progress this year by adding support for Brew and Docker, but there are still important omissions for other platforms, such as:

  • Windows (e.g. Nuget)
  • Linux (e.g. Debian/Fedora/Arch)

This is one of the areas where I have the greatest difficulty making progress because each package repository tends to have a pretty high barrier to entry in my experience. If you are somebody who has experience with any of the above package repositories and could help me get started I would appreciate it!

Package discovery

I mentioned earlier that Dhall is growing quite a large open source package footprint, but these packages are not easy to discover.

One effort to address this is:

… which is working to create a “mono-repo” of Dhall packages to promote discoverability.

In addition to that, the language probably also needs a standard documentation generator. There have been a few nascent efforts along these lines this year, but at some point we need to take this idea “all the way”.

Library of Kubernetes utilities

Last year I mentioned that I would spend some time on a new Ops-related Dhall integration and I quickly gravitated towards improving the existing dhall-kubernetes integration. After familiarizing myself with Kubernetes I realized that this is a use case that is served well by Dhall, since Kubernetes configurations are highly unmaintainable.

Programmable Kubernetes configuration files are a bit of a crowded field (a cottage industry, really), with a steady stream of new entrants (like Pulumi, Cue, and Tanka). That said, I’m fairly confident that with some attention Dhall can become the best-in-class solution in this space.

Conclusion

I would like thank everybody who contributed last year, and I apologize if I forgot to acknowledge your contribution.

This post is not an exhaustive list of what happened over the last year. If you would like to learn more, the best places to start are:

Please don’t forget to take our yearly survey to provide feedback on the language or to inform the future direction:

In a month I will follow up with another post reviewing the survey feedback.

by Gabriel Gonzalez (noreply@blogger.com) at January 05, 2020 09:55 PM

January 04, 2020

Jasper Van der Jeugt

Mandelbrot & Lovejoy's Rain Fractals

Summary

At some point during ICFP2019 in Berlin, I came across a completely unrelated old paper by S. Lovejoy and B. B. Mandelbrot called “Fractal properties of rain, and a fractal model”.

While the model in the paper is primarily meant to model rainfall, the authors explain that it can also be used for rainclouds, since these two phenomena are naturally similarly shaped. This means it can be used to generate pretty pictures!

While it looked cool at first, it turned out to be an extremely pointless and outdated way to generate pictures like this. But I wanted to write it up anyway since it is important to document failure as well as success: if you’ve found this blogpost searching for an implementation of this paper; well, you have found it, but it probably won’t help you. Here is the GitHub repository.

The good parts

I found this paper very intriguing because it promises a fractal model with a number of very attractive features:

  • is extremely simple
  • has easy-to-understand parameters
  • is truly self-similar at different scales
  • has great lacunarity (I must admit I didn’t know this word before going through this paper)

Most excitingly, it’s possible to do a dimension-generic implementation! The code has examples in 2D as well as 3D (xy, time), but can be used without modifications for 4D (xyz, time) and beyond. Haskell’s type system allows capturing the dimension in a type parameter so we don’t need to sacrifice any type safety in the process.

For example, here is the dimension-generic distance function I used with massiv:

distance :: M.Index ix => ix -> ix -> Distance
distance i j = Distance . sqrt . fromIntegral . M.foldlIndex (+) 0 $
    M.liftIndex2 (\p s -> (p - s) * (p - s)) i j

Here is a 3D version:

The (really) bad parts

However, there must be a catch, right? If it has all these amazing properties, why is nobody using it? I didn’t see any existing implementations; and even though I had a very strong suspicion as to why that was the case, I set out to implement it during Munihac 2019.

As I was working on it, the answer quickly became apparent – the algorithm is so slow that its speed cannot even be considered a trade-off, its slowness really cancels out all advantages and then some! BitCoin may even be a better use of compute resources. The 30 second video clip I embedded earlier took 8 hours to render on a 16-core machine.

This was a bit of a bummer on two fronts: the second one being that I wanted to use this as a vehicle to learn some GPU programming; and it turned out to be a bad fit for GPU programming as well.

At a very high-level, the algorithm repeats the following steps many, many times:

  1. At random, pick a position in (or near) the image.
  2. Pick a size for your circular shape in a way that the probability of the size being larger than P is P⁻¹.
  3. Draw the circular shape onto the image.

This sounds great for GPU programming; we could generate a large number of images and then just sum them together. However, the probability distribution from step 2 is problematic. Small (≤3x3) shapes are so common that it seems faster to use a CPU (or, you know, 16 CPUs) and just draw that specific region onto a single image.
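For illustration (this is my own sketch, not code from the paper or the repository): a size distribution with P(size > x) = x⁻¹ can be drawn by plain inverse-transform sampling, which also makes clear why tiny pulses dominate the workload:

```haskell
-- Inverse-transform sampling for step 2: if u is uniform on (0, 1],
-- then s = 1 / u satisfies P(s > x) = 1/x for all x >= 1, i.e. the
-- probability of drawing a size larger than x falls off as x⁻¹.
pulseSize :: Double -> Double
pulseSize u = 1 / u

-- A tiny linear congruential generator, used here only so the sketch
-- needs nothing outside base (real code would use a proper RNG).
nextSeed :: Int -> Int
nextSeed seed =
  (6364136223846793005 * seed + 1442695040888963407) `mod` (2 ^ 31)

-- An infinite stream of uniform samples in (0, 1] from a starting seed.
uniforms :: Int -> [Double]
uniforms seed = map toUnit (tail (iterate nextSeed seed))
  where
    toUnit n = (fromIntegral n + 1) / (2 ^ 31 + 1)
```

Since P(s > 3) = 1/3, two-thirds of all sampled sizes are 3 or smaller — exactly the flood of tiny shapes described above.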

The paper proposes 3 shapes (which it calls “pulses”). It starts out with just drawing plain opaque circles with a hard edge. This causes some interesting but generally bad-looking edges:

Hard circular pulses

It then switches to using circles with smoothed edges, which looks much better; we’re getting properly puffy clouds here:

Smooth circular pulses

Finally, the paper discusses drawing smoothed-out annuli, which dramatically changes the shapes of the clouds:

Annular pulses

It’s mildly interesting that the annuli become hollow spheres in 3D.

Thanks to Alexey for massiv and a massive list of suggestions on my implementation!

by Jasper Van der Jeugt at January 04, 2020 12:00 AM

January 03, 2020

Matt Parsons

Plucking Constraints

There’s a Haskell trick that I’ve observed in a few settings, and I’ve never seen a name put to it. I’d like to write a post about the technique and give it a name. It’s often useful to write in a type class constrained manner, but at some point you need to discharge (or satisfy?) those constraints. You can pluck a single constraint at a time.

This technique is primarily used in mtl (or other effect libraries), but it also has uses in error handling.

Gathering Constraints

We can easily gather constraints by using functions that require them. Here’s a function that has a MonadReader Int constraint:

number :: (MonadReader Int m) => m Int
number = ask

Here’s another function that has a MonadError String constraint:

woops :: (MonadError String m) => m void
woops = throwError "woops!"

And yet another function with a MonadState Char constraint:

update :: (MonadState Char m) => m ()
update = modify succ

We can seamlessly write a program that uses all of these functions together:

program = do
    number
    woops
    update

GHC will happily infer the type of program:

program
    :: ( MonadReader Int m
       , MonadError String m
       , MonadState Char m
       )
    => m ()

At some point, we’ll need to actually use this. Virtually all Haskell code that gets used is called from main :: IO ().

Let’s try just using it directly:

main :: IO ()
main = program

GHC is going to complain about this. It’s going to say something like:

No instance for `MonadReader Int IO`  arising from a use of `program`
    ....

No instance for `MonadState Char IO` arising from a use of `program`
    ....

Couldn't match type `IOException` with type `String`
    ....

This is GHC’s way of telling us that it doesn’t know how to run our program in IO. Perhaps the IO type is not powerful enough to do all the stuff we want as-is. And it has a conflicting way to throw errors - the MonadError instance is for the IOException type, not the String that we’re trying to use. So we have to do something differently.

Unify

Let’s try figuring out what GHC is doing with main = program. First, we’ll look at the equations:

program
    :: ( MonadReader Int m
       , MonadError String m
       , MonadState Char m
       )
    => m  ()
main 
    :: IO ()

GHC sees that the “shape” of these types is similar. It can substitute IO for m in program. Does that work?

program
    :: ( MonadReader Int IO
       , MonadError String IO
       , MonadState Char IO
       )
    => IO ()

Yeah! That looks okay so far. Now, we have a totally concrete constraint: MonadReader Int IO doesn’t have any type variables. So let’s look it up and see if we can find an instance

. . .

Unfortunately, there’s no instance defined like this. If there’s no instance for IO, then how are we going to satisfy that constraint? We need to get rid of it and discharge it somehow!

The mtl library gives us a type whose sole responsibility is discharging the MonadReader constraint: ReaderT. Let’s check out the runReaderT function:

runReaderT 
    :: ReaderT r m a 
    -> r
    -> m a

runReaderT says:

My first argument is a ReaderT r m a. My second argument is the r environment. And then I’ll take off the ReaderT business on the type, returning only m.

We’re going to pluck off that MonadReader constraint by turning it into a concrete type. And runReaderT is one way to do that plucking.
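As a quick aside (my own example, not from the post), the plucking can be seen in miniature by running a bare ask: the MonadReader Int constraint is fixed to a concrete ReaderT Int IO layer, and runReaderT peels it off, leaving plain IO.

```haskell
import Control.Monad.Reader (ask, runReaderT)

main :: IO ()
main = do
  -- ask :: MonadReader Int m => m Int; here m is instantiated to
  -- ReaderT Int IO, and runReaderT discharges that layer.
  n <- runReaderT ask (42 :: Int)
  print n  -- prints 42
```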

GHC inferred a pretty general type for program earlier, but we can pick a more concrete type.

program
    :: ( MonadError String n
       , MonadState Char n
       )
    => ReaderT Int n ()

Notice how we’ve shifted a constraint into a concrete type. We’ve fixed the type of m to be ReaderT Int n, and all the other constraints got delegated down to this new type variable n. We don’t need to pick this concrete type at our definition site of program. Indeed, we can provide that annotation somewhere else, like in main:

main :: IO ()
main =
    let 
        program' 
            :: ( MonadError String n
               , MonadState Char n
               )
            => ReaderT Int n ()
        program' = 
            program
     in
        runReaderT program' 3

We’re literally saying “program' is exactly like program but we’re making it a tiny bit more concrete.”

Now, GHC still isn’t happy. It’s going to complain that there’s no instance for MonadState Char IO and that String isn’t equal to IOException. So we have a little more work to do.

Fortunately, the mtl library gives us types for plucking these constraints off too: StateT and runStateT can be used to pluck off a MonadState constraint, and ExceptT and runExceptT to pluck off a MonadError constraint.

Let’s write program'', which will use StateT to ‘pluck’ the MonadState Char constraint off.

main :: IO ()
main =
    let 
        program' 
            :: ( MonadError String n
               , MonadState Char n
               )
            => ReaderT Int n ()
        program' = 
            program

        program''
            :: (MonadError String n)
            => ReaderT Int (StateT Char n) ()
        program'' =
            program'

        programRead 
            :: (MonadError String n)
            => StateT Char n ()
        programRead =
            runReaderT program'' 3
     in
        runStateT programRead 'a'

GHC still isn’t happy - it’s going to complain that () and ((), Char) aren’t the same types. Also we still haven’t dealt with IOException and String being different.

So let’s use ExceptT to pluck out that final constraint.

main :: IO ()
main =
    let 
        program' 
            :: ( MonadError String n
               , MonadState Char n
               )
            => ReaderT Int n ()
        program' = 
            program

        program''
            :: (MonadError String n)
            => ReaderT Int (StateT Char n) ()
        program'' =
            program'

        program'''
            :: (Monad m)
            => ReaderT Int (StateT Char (ExceptT String m)) ()
        program''' =
            program''
-- ... snip ...

Okay, so I’m going to snip here and talk about something interesting. When we plucked the MonadError constraint out, we didn’t totally remove it. Instead, we’re left with a Monad constraint. We’ll get into this later. But first, let’s look at the steps that happen when we run the program, one piece at a time.

-- ... snip ...
        programRead 
            :: (Monad m)
            => StateT Char (ExceptT String m) ()
        programRead =
            runReaderT program''' 3

        programStated
            :: (Monad m)
            => ExceptT String m ((), Char)
        programStated =
            runStateT programRead 'a'

        programExcepted 
            :: (Monad m)
            => m (Either String ((), Char))
        programExcepted =
            runExceptT programStated

        programInIO 
            :: IO (Either String ((), Char))
        programInIO =
            programExcepted

     in do
        result <- programInIO
        case result of
            Left err -> do
                fail err
            Right ((), endState) -> do
                print endState
                pure ()

GHC doesn’t error on this!

When we finally get to programExcepted, we have a type that GHC can happily accept. The IO type has an instance of Monad, and so GHC can instantiate the (Monad m) => m in its type at IO without any fuss.

These are all of the steps, laid out explicitly, but we can condense them significantly.

program
    :: ( MonadReader Int m
       , MonadError String m
       , MonadState Char m
       )
    => m ()
program = do
    number
    woops
    update

main :: IO ()
main = do
    result <- runExceptT (runStateT (runReaderT program 3) 'a')
    case result of
        Left err -> do
            fail err
        Right ((), endState) -> do
            print endState
            pure ()
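Adding the imports, the condensed version above becomes a self-contained program (a sketch; FlexibleContexts is needed because the signatures constrain m with concrete types like Int, and this version prints the error instead of calling fail so it exits cleanly):

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Except (MonadError, runExceptT, throwError)
import Control.Monad.Reader (MonadReader, ask, runReaderT)
import Control.Monad.State (MonadState, modify, runStateT)

number :: MonadReader Int m => m Int
number = ask

woops :: MonadError String m => m void
woops = throwError "woops!"

update :: MonadState Char m => m ()
update = modify succ

program :: (MonadReader Int m, MonadError String m, MonadState Char m) => m ()
program = do
  _ <- number
  woops
  update

main :: IO ()
main = do
  -- Pluck MonadReader, then MonadState, then MonadError, landing in IO.
  result <- runExceptT (runStateT (runReaderT program 3) 'a')
  case result of
    Left err             -> putStrLn ("error: " ++ err)
    Right ((), endState) -> print endState
```

Running it prints error: woops!, since woops short-circuits before update ever runs.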

Plucking Constraints!

The general pattern here is:

  1. A function has many constraints.
  2. You can pluck a single constraint off by making the type a little more concrete.
  3. The rest of the constraints are delegated to the new type.

We don’t need to only do this in main. Suppose we want to discharge the MonadReader Int inside of program:

program
    :: ( MonadState Char m
       , MonadError String m
       )
    => m ()
program = do
    i <- gets fromEnum
    runReaderT number i
    woops
    update

We plucked the MonadReader constraint off of number directly and discharged it right there.

So you don’t have to just collect constraints until you discharge them in main. You can pluck them off one-at-a-time as you need to, or as it becomes convenient to do so.

How does it work?

Let’s look at ReaderT and MonadReader to see how the type and class are designed for plucking. We don’t need to worry about the implementations, just the types:

newtype ReaderT r m a

-- or, with explicit kinds,
newtype ReaderT
    (r :: Type)
    (m :: Type -> Type)
    (a :: Type)

class MonadReader r m | m -> r

instance (Monad m) => MonadReader r (ReaderT r m)

instance (MonadError e m) => MonadError e (ReaderT r m)
instance (MonadState s m) => MonadState s (ReaderT r m)

ReaderT, partially applied, has a few different readings:

-- [1]
ReaderT r       :: (Type -> Type) -> (Type -> Type)
-- [2]
ReaderT r m                       :: (Type -> Type)
-- [3]
ReaderT r m a                              :: Type

  1. With just an r applied, we have a ‘monad transformer.’ Don’t worry if this is tricky: just notice that we have something like (a -> a) -> (a -> a). At the value level, this might look something like:
     updatePlayer :: (Player -> Player) -> GameState -> GameState
    

    Where we can call updatePlayer to ‘lift’ a function that operates on Players to an entire GameState.

  2. With an m and an r applied, we have a ‘monad.’ Again, don’t worry if this is tricky. Just notice that we have something that fits the same shape that the m parameter has.
  3. Finally, we have a regular old type that has runtime values.

The important bit here is the ‘delegation’ type variable. For the class we know how to handle, we can write a ‘base case’:

instance (Monad m) => MonadReader r (ReaderT r m)

And for the classes that we don’t know how to handle, we can write ‘recursive cases’:

instance (MonadError e m) => MonadError e (ReaderT r m)
instance (MonadState s m) => MonadState s (ReaderT r m)

Now, GHC has all the information it needs to pluck a single constraint off and delegate the rest.

Plucking Errors

I mentioned that this technique can also be applied to errors. First, we need to write classes that work for our errors. Let’s say we have database, HTTP, and filesystem errors:

class AsDbError err where
    liftDbError :: DbError -> err
    isDbError :: err -> Maybe DbError

class AsHttpError err where
    liftHttpError :: HttpError -> err
    isHttpError :: err -> Maybe HttpError

class AsFileError err where
    liftFileError :: FileError -> err
    isFileError :: err -> Maybe FileError

Obviously, our ‘base case’ instances are pretty simple.

instance AsDbError DbError where
    liftDbError = id
    isDbError = Just

instance AsHttpError HttpError where
    liftHttpError = id
    isHttpError = Just

-- etc...

But we need a way of “delegating.” So let’s write our ‘error transformer’ type for each error:

data DbErrorOr err = IsDbErr DbError | DbOther err

data HttpErrorOr err = IsHttpErr HttpError | HttpOther err

data FileErrorOr err = IsFileErr FileError | FileOther err

Now, we can write an instance for DbErrorOr.

instance AsDbError (DbErrorOr err) where
    liftDbError dbError = IsDbErr dbError
    isDbError (IsDbErr e) = Just e
    isDbError (DbOther _) = Nothing

This one is pretty simple - it is also a ‘base case.’ Let’s write the recursive case:

instance AsHttpError err => AsHttpError (DbErrorOr err) where
    liftHttpError httpError = DbOther (liftHttpError httpError)
    isHttpError (IsDbErr _) = Nothing
    isHttpError (DbOther err) = isHttpError err

Here, we’re just writing some boilerplate code to delegate to the underlying err variable. We’d want to repeat this for every permutation, of course. Now, we can compose programs that throw varying errors:

program
    :: (AsHttpError e, AsDbError e)
    => Either e ()
program = do
    Left (liftHttpError HttpError)
    Left (liftDbError DbError)

The constraints collect exactly as nicely as you’d want, and the type class machinery allows you to easily go from the single type to the concrete type.

Let’s ‘pluck’ the constraint. We’ll ‘pick’ a concrete type and delegate the other constraint to the type variable:

program' 
    :: (AsHttpError e)
    => Either (DbErrorOr e) ()
program' = program

GHC is pretty happy about this. All the instances work out, and it solves the problem of how to delegate everything for you.

We can pattern match directly on this, which allows us to “catch” individual errors and discharge them:

handleLeft :: Either err a -> (err -> Either err' a) -> Either err' a
handleLeft (Right r) _ = Right r
handleLeft (Left l) f = f l

program'' :: AsHttpError e => Either e ()
program'' =
    handleLeft program $ \err ->
        case err of
            IsDbErr dbError ->
                Right ()
            DbOther dbOther ->
                Left dbOther

Voila! We’ve “handled” the database error, but we’ve delegated handling the HTTP error. The technique of ‘constraint plucking’ works out here.
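Putting the pieces together, here is a self-contained sketch of the whole pattern (DbError and HttpError are hypothetical placeholder types, since the post leaves them abstract, and this version throws only the database error so the catch actually fires):

```haskell
data DbError = DbError deriving (Eq, Show)
data HttpError = HttpError deriving (Eq, Show)

class AsDbError err where
  liftDbError :: DbError -> err

class AsHttpError err where
  liftHttpError :: HttpError -> err

-- 'Error transformer': either a database error, or some other error.
data DbErrorOr err = IsDbErr DbError | DbOther err deriving (Eq, Show)

-- Base case: DbErrorOr handles database errors itself.
instance AsDbError (DbErrorOr err) where
  liftDbError = IsDbErr

-- Recursive case: HTTP errors are delegated to the inner type.
instance AsHttpError err => AsHttpError (DbErrorOr err) where
  liftHttpError = DbOther . liftHttpError

-- Base case for the concrete HTTP error type.
instance AsHttpError HttpError where
  liftHttpError = id

program :: (AsDbError e, AsHttpError e) => Either e ()
program = Left (liftDbError DbError)

-- Pick the concrete DbErrorOr layer; the HTTP constraint is delegated.
program' :: AsHttpError e => Either (DbErrorOr e) ()
program' = program

handleLeft :: Either err a -> (err -> Either err' a) -> Either err' a
handleLeft (Right r) _ = Right r
handleLeft (Left l)  f = f l

-- Pluck the database constraint: catch IsDbErr, pass everything else on.
handled :: AsHttpError e => Either e ()
handled = handleLeft program' $ \err -> case err of
  IsDbErr _    -> Right ()
  DbOther rest -> Left rest
```

At the concrete type Either HttpError (), handled evaluates to Right (): the database error has been caught, and only the HTTP constraint survives.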

Now, an astute reader might note that this technique is so boring. There’s so much boilerplate code!! SO MUCH!!!

Come on, y’all. It’s exactly the same amount of boilerplate code as the mtl library requires. Is it really that bad?

YESSSS!!!

Okay, yeah, it’s pretty bad. This encoding is primarily here to present the ‘constraint plucking’ technique. You can do a more general and ergonomic approach to handling errors like this, but describing it is out of scope for this post. I’ve published a library named plucky that captures this pattern, and the module documentation covers it pretty extensively.

Hopefully you find this concept as useful as I have. Best of luck in your adventures!

January 03, 2020 12:00 AM

January 02, 2020

Chris Smith 2

Formatting code in CodeWorld

I’ve made some big changes in the last few days to code formatting in CodeWorld. You can try this out with Ctrl-I in any editor window.

Formatting in Haskell mode (an aside)

On Halloween, I switched CodeWorld’s Haskell formatter from hindent to Ormolu. I suppose I should say something about the reasons for the change.

First, hindent was no longer maintained, and it seemed like the community was moving in different directions. The other two popular formatting tools for Haskell are Brittany and Ormolu. I find a lot to like about Brittany, but unfortunately I was unable to use it because of its license. This led me to evaluate Ormolu.

I find a lot to like about Ormolu’s philosophy that it will keep and add consistency to programmer choices about formatting. My best experiences with autoformatting code come when I can type something like what I meant, and then hit a button and let the system just clean it up for me. I don’t much care whether the same AST always produces the same formatting (and, in fact, I don’t even believe the notion is well-defined), and if I see something that doesn’t look great, I love the idea that I can nudge things in a different direction, and the formatter will take the hint.

So I’m happy to jump on Ormolu’s bandwagon for a bit, particularly since this isn’t a production-critical use case. But Ormolu is definitely not yet ready for prime time. It mangles formatting in lots of ways, but the worst is removing all blank lines in do blocks, turning your well-written code into garbage. So use at your own risk. I felt like including a premature but promising project was a better choice than sticking with a dead end tool.

Formatting in the educational dialect

That’s the less interesting change, though. The other thing I’ve been working on is formatting in the educational dialect of CodeWorld. Here, no existing Haskell formatter will do the job, since this dialect has different conventions and requirements.

Some of the highlights:

  • The educational dialect uses uncurried functions, which should be written in standard math syntax like f(x), and not f (x) with a space. This is a minor point, but a big deal, and it was surprisingly tricky to retrofit into an existing formatting engine, since changes in columns in one line trickle down into alignment decisions in later lines.
  • Concerns like minimizing diffs don’t matter at all for this tool. Concerns like making the structure of code more apparent at a glance matter a lot. So alignment, in particular, is very valuable.

I already had an implementation from last summer of auto-indent for the educational dialect. It wasn’t always perfect, but teaching with it in the Fall semester dramatically improved student experiences, as students were much less likely to run into problems with layout and indents. Fernando Alegre filed a bug a few days ago complaining that it should be a lot better. (He said less opinionated, but I am convinced that what it needed was better opinions, not fewer opinions.)

So I started working on that. In the process, I ended up realizing that I can do a quick formatter for the educational dialect by just running the auto-indent algorithm on each line! Okay, there’s more subtlety than that: just auto-indenting can choose the wrong layout levels for some lines, so I first walk through the code, noting the layout level at the beginning of each line. Then I walk through again, run the auto-indent, and then ask it to increase or decrease the indent until the line is back at its correct layout level.

I did this originally just to quickly see the results of my auto-indent changes on a large body of code, and it was super helpful. I found a dozen or so little bugs and regressions that would have been missed otherwise. But when I was done, I realized that this is a really useful tool. I now plan to tell my students that before they ask for help on a parse error, they must run the formatter. I predict that much of the time, their mistake will be obvious after just that step.

My second lesson: formatting Haskell (even a small educational dialect) is not easy! I’m glad I have collected a large number of samples of code — my own, my students’, and others’ students’ — to test on, because they popped up all kinds of crazy corner cases and weird examples.

by Chris Smith at January 02, 2020 07:36 PM

January 01, 2020

Alson Kemp

A Bit of A Continuation for Moore’s Law?

Note: CPU references in this post are all to Intel CPUs. Other CPU families took similar paths but did so with different timelines and trade-offs (e.g. the inclusion of FPU and/or MMU functionality in the CPU).

First, a historical ramble…

What follows is accurate enough for what follows…

Much as with so much on the web, Moore’s Law had a specific origin but has been through a number of updates/revisions/extensions to remain relevant to those who want it to remain relevant. Originally, it was about the number of transistors that could be built into a single semiconductor product. Presumably that number got awfully large and was meaningless to most people (transistor?), so Moore’s Law was sort of retooled to refer to compute capability (MIPS, FLOPS) or application performance (frames per second in a 3D video game, TPC-* for databases, etc.). If your widget was getting faster, then there was “an [Moore’s Law] for that” (to paraphrase Apple). And Moore saw and he was pleased.

But really all the faster-being was, of course, underpinned by the various dimensions of scaling for semiconductors. Processors (the things most people care about the most) are made using MOSFETs (a very common type of transistor used to build processors/logic, but a bit different from those in the original Moore’s Law), and Robert Dennard wrote a paper noting that MOSFETs have particular scaling properties. See Dennard Scaling: “[if] transistor dimensions could be scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This would reduce circuit delays by 30% (0.7x) and therefore increase operating frequency by about 40% (1.4x). Finally, to keep the electric field constant, voltage is reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%”. This was also known as “triple scaling” as it implied that three scaling factors would simultaneously improve: geometry decrease (density), frequency increase and power decrease (for equivalent functionality).

As Dennard scaling started to break down [1] due to the effects of smaller geometries (leakage current increased so that smaller circuits “leaked” power and power leakage started to degrade the overall power benefits), to an inability to continue lowering voltage (again, degrading power improvements) and to frequency stabilization (signals still have to propagate across some distance and smaller transistors had a harder time doing so; your 2020 CPU isn’t going to be much higher frequency than your old 2010 processor), the focus moved on to multi-core processors/systems and to heterogeneous computing.

Multi-core systems came about as an increasing number of transistors was available but diminishing returns affected clockspeeds and raw instructions-per-cycle. L1, L2 and L3 caches were increased until they, too, produced diminishing returns. While systems had been multi-processor for a while, processors became multi-core as per-core performance started to flatten (with languages to catch up a bit later) and “excess” transistors were available with each product in an upgraded product line.

Heterogeneous computing refers to the CPU offloading specialized compute tasks to associated specialized logic blocks or coprocessors. In earlier days, this was often floating point processing: an 8087 FPU (floating point math) could optionally be installed alongside the 8088 CPU (memory access + logic + integer calculation). Then the FPU was bolted into the x86 with the 80486. Then a common coprocessor use was for high performance networking, allowing commodity CPUs to offload network processing and to recover CPU cycles. Then FPGAs were placed next to CPUs (in specialized cases) to provide highly customizable compute acceleration (e.g. video or audio encode/decode, Bitcoin mining, etc). (The niftiest application, IMHO, was the now defunct Leopard Logic (see also).) GPUs were also generally available, so they are also used as general-purpose, though specialized, coprocessors. While the many variations on coprocessors could improve computational performance by peeling off ever larger chunks of traditional workloads, they, too, are limited by Dennard Scaling and by computational limits (e.g. a complete, optimized 32-bit FMA may not be able to be optimized further using additional transistors). GPUs temporarily circumvented this limitation by including ever more of their relatively simple compute units but the utility of further parallel units will diminish. Various companies are now pursuing specialized coprocessors for AI (see Google’s TPU).

In any case, we actually do appear to be reaching the end of Moore’s Law by continuously shrinking process geometries.

And now the main ramble…

What follows is utter speculation…

Are there ways to keep going with scaling? Perhaps 3D chip technologies can help but these, when used for logic, are tremendously limited by power dissipation (without a process shrink, 50% more transistors ~= 50% more power). Other highly speculative techniques may produce non-CMOS-based logics.

One area that hasn’t received much attention is utilization (at least w.r.t. scaling). The hyperscalers are certainly paying attention to this factor but a significant (vast?) majority of the processors/co-processors sold worldwide run at very low utilization. Your mobile phone is powered down as much as possible (even if the screen is on); your laptop constantly cycles its clock frequency down in order to reduce power draw (and to avoid spinning the fan (more power)). So, around the world, huge amounts of processing power are sitting idle (on our phones, laptops, TVs and desks).

So what happens if we leave as little functionality on a phone or laptop as possible and leave the “computing” to the hyperscalers? We’re back to the good old days of thin-clients and thick-servers (sorta like “big iron” but more like heaps of “iron dust”). Assuming 75% of the world’s compute power is nearly un-utilized (~2.5%) and 25% is heavily utilized (I’ve heard from hyperscalers that 33% utilization is a good average for hyperscalers), then we’re at about 10% utilization for global compute. If “user” compute power (laptop, phone, etc) is constant (which it kinda is…) but demand for global compute continues rising, then hyperscalers have a significant incentive to increase utilization because additional compute is demanded and improved utilization has roughly no marginal cost (who doesn’t like free money?).

If the above obtains and compute deployment shifts and utilization improves, we might wind up with 25% of the world’s compute underutilized @ 10% and 75% of the world’s compute more heavily utilized @ 50%. Then global compute utilization would sit at roughly (0.25*10% + 0.75*50%) 40%. If this shift were achieved over 3-5 years, that would represent a 7-12% annual increase in free compute improvements with no process improvements. Certainly not Moore’s Law’s ~45% increase and the improvement isn’t that visible to users but, hey!, free faster-compute is still a good thing.
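The weighted-average arithmetic above can be checked directly (a sketch of the calculation only; the shares and utilization figures are the post's assumptions, nothing more):

```haskell
-- Global utilization as a weighted average of
-- (share of world compute, utilization of that share) pairs.
utilization :: [(Double, Double)] -> Double
utilization = sum . map (uncurry (*))

-- Today: 75% of compute at ~2.5% utilization, 25% at ~33%.
today :: Double
today = utilization [(0.75, 0.025), (0.25, 0.33)]  -- ≈ 0.10

-- After the shift: 25% at 10%, 75% at 50%.
future :: Double
future = utilization [(0.25, 0.10), (0.75, 0.50)]  -- ≈ 0.40
```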


A friend pointed out that client compute burst demands (see browser JavaScript, React and friends) are still increasing, so client compute would benefit from further enhancement. True, but assuming that client compute capability doesn't grow significantly, that compute, as noted, is bursty and could be migrated to and aggregated in server/cloud compute. The on-demand, bursty client workloads would aggregate in server/cloud compute as day cycles migrate across the globe, and would be replaced by batch workloads as client workloads decline when night cycles migrate across the globe. Further, client workload efficiency could be improved by using more efficient (e.g. statically typed, compiled) computer languages. Besides, why would we want to fab, sell and buy (literally) tons of under-utilized silicon? I'd prefer to buy a cheaper phone and let the server handle the compute and deliver the final results. Hell, why not have the server do all of the work and deliver compressed streams of pixels (see: RDP and ITU-T T.128)? In any case, if Moore's Law starts to falter, your client compute experience will continue to improve.


[1] These notes on the end of Dennard scaling are probably 75% accurate (it's more complicated; some bits kept scaling; etc.) but that doesn't really matter for this post; Dennard scaling is dead or greatly slowed.

by alson at January 01, 2020 02:50 AM

December 30, 2019

Monday Morning Haskell

Happy New Years from MMH!


Tomorrow is New Year's Eve, so this week we're celebrating with a review of everything we've done this year! This was a busy year in which we broached a few new topics, spending a lot of time on a couple of areas where we don't hear much about Haskell development. Let's do a quick recap.

2019 In Review

We started this year by going over some of the basic concepts in Haskell data types. We compared these ideas with data in other languages. This culminated in the release of our Haskell Data Series. This series is a good starting point for Haskell beginners. It can help you get more familiar with a lot of Haskell concepts, by comparison to other languages.

For much of the spring and summer, we then focused on game development with the Gloss library. The end product of this was our Maze Game. It can teach some useful Haskell skills and provides some cheap entertainment!

During the fall, we then used this game as a platform to learn more about AI and machine learning. We explored a couple of algorithms for solving our maze by hand. We tried several different approaches to machine-learn an algorithm for the task. We went through quite a bit of ML theory, even if the results weren't always great in practice! Most of the code from this series is in our MazeLearner repository.

We closed out the year by exploring the Rust programming language. This language is like a mix of Haskell and C++. Its syntax is more like C++'s, but it incorporates many good functional ideas. The result is a language that has many of the nice things we've come to expect from Haskell land. But it's also performant and very accessible to a wider audience.

Upcoming Plans

This next year, we've got a lot planned! We're planning on some major new offerings in our online course selection! There are currently two out there. There's the free Stack mini-course (open now). Then there's the Haskell From Scratch beginner's course, which will reopen soon.

We're now close to launching a new course showcasing some of the more practical uses for Haskell! We're expecting to launch that within the next couple months! We've also got plans in the works for some smaller courses that should go live later in the year.

In the short-term, we've got a few different things planned for the blog as well. We're going to retool the website to give it a better appearance, especially for code. We'll also look to have tighter integration of code samples with GitHub. Expect to see these updates on our permanent content soon!

Topic-wise, you can expect a wide variety of content. We'll spend some time at the start of the year going back to the basics, as we usually do. We'll move on to some more practical elements of Haskell. We'll also come back to AI and machine learning. We'll look at some very simple games, and generalize the idea of agents within them. We'll use these simple games as an easier platform to make complex algorithms work.

Conclusion

There's a lot planned as we move into our 4th year of Haskell! So stay tuned! And remember to subscribe to our mailing list! This will give you access to our Subscriber Resources! You'll also get our monthly newsletter, so you know what's happening!

by James Bowen at December 30, 2019 03:30 PM

FP Complete

Teaching Haskell with Duet

Teaching Haskell with Duet

Teaching Haskell to complete beginners is an enjoyable experience. Haskell is foreign; many of its features are alien to other programmers. It's purely functional. It's non-strict. Its type system is among the most pervasive of any practical language.

Simple at the core

Haskell's core language is simple, though. It shares this with Lisp. Structure and Interpretation of Computer Programs (SICP), from MIT, teaches Lisp beginning with a substitution model for function application. This turns out to work well for Haskell too. This is how I've been teaching Haskell to beginners at FP Complete for our clients.

For example, in SICP, they use the example:

(+ (square 6) (square 10))

which reduces the function square to

(+ (* 6 6) (* 10 10))

which reduces by multiplication to:

(+ 36 100)

and finally to

136

As they note in SICP,

The purpose of the substitution is to help us think about procedure application, not to provide a description of how the interpreter really works. Typical interpreters do not evaluate procedure applications by manipulating the text of a procedure to substitute values for the formal parameters.

That is, this is a model; it's not the real thing. In fact, if you really eyeball the very first step, you might wonder which (square ..) argument is evaluated first of the two. Scheme doesn't specify an argument order; it varies. An implementation may even inline the whole thing.

Rather, if we think about programs in terms of a simple sequence of rewrites, we get a lot of bang for our buck in terms of reasoning and understanding.
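For reference, the SICP example is directly expressible in ordinary GHC Haskell, and evaluates to the same final answer as the substitution steps above:

```haskell
-- The SICP example as plain Haskell: square applied to 6 and 10,
-- with the results summed, reducing (by substitution) to 136.
square :: Int -> Int
square x = x * x

main :: IO ()
main = print (square 6 + square 10)  -- 136
```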

The right language to model

Motivated by this goal, I started thinking about automating this process, so that students could use this model more readily, and see the shape of functions and algorithms visually. The solution I came up with was a new language that is a subset of Haskell, which I'll cover in this post.

The reason that it's not full Haskell is that Haskell has a lot of surface-level syntactic sugar. Evaluating the real language is complicated and infeasible. The following contains too many things at once to consider:

quicksort1 :: (Ord a) => [a] -> [a]
quicksort1 [] = []
quicksort1 (x:xs) =
  let smallerSorted = quicksort1 [a | a <- xs, a <= x]
      biggerSorted = quicksort1 [a | a <- xs, a > x]
  in  smallerSorted ++ [x] ++ biggerSorted

Pattern matches at the definition, list syntax, comprehensions, lets. There's a lot going on here that makes a newbie's eyes glaze over. We have to start simpler. But how simple?

GHC Haskell has a tiny language called Core, to which all Haskell programs compile down. Its AST looks roughly like this:

data Expr
  = App Expr Expr
  | Var Var
  | Lam Name Type Expr
  | Case Expr [Alt]
  | Let Bind Expr
  | Lit Literal

Evaluation of Core is simple. However, Core is also a little too low-level: it passes polymorphic types and type-class dictionaries as ordinary arguments, inlines a lot of things, looks underneath boxed types like Int (into I#), and adds some extra capabilities normal Haskell doesn't have that are only appropriate for a compiler writer to see. The above function compiled to Core starts like this:

quicksort1
  = \ (@ a_a1Zd) ($dOrd_a1Zf :: Ord a_a1Zd) (ds_d22B :: [a_a1Zd]) ->
      case ds_d22B of {
        [] -> GHC.Types.[] @ a_a1Zd;
        : x_a1sG xs_a1sH ->
          ++
            @ a_a1Zd
            (quicksort1
               @ a_a1Zd
               $dOrd_a1Zf
               (letrec {
                  ds1_d22C [Occ=LoopBreaker] :: [a_a1Zd] -> [a_a1Zd]
           ...

We have to explain the special list syntax, module qualification, polymorphic types, dictionaries, etc. all in one go, besides the obvious challenge of the unreadable naming convention. Core is made for compilers and compiler writers, not for humans.

Duet

Therefore I took a middle way. Last year I wrote a language called Duet, a Haskell subset made specifically for teaching at this early stage of learning Haskell. Duet has only these language features: data types, type-classes, top-level definitions, lambdas, case expressions, and some literals (strings, integrals, rationals). Its main feature is steppability: the ability to step through the code. Every step produces a valid program.

Returning to the SICP example with our new tool, here's the same program in Duet:

square = \x -> x * x
main = square 6 + square 10
chris@precision:~/Work/duet-lang/duet$ duet run examples/sicp.hs
(square 6) + (square 10)
((\x -> x * x) 6) + (square 10)
(6 * 6) + (square 10)
36 + (square 10)
36 + ((\x -> x * x) 10)
36 + (10 * 10)
36 + 100
136

Here we see a substitution model in action. Each line is a valid program! You can take any line from the output and run it from that point.

Unlike Scheme, Duet picks an argument order (left-to-right) for strict functions like integer operations.

Note: You can follow along at home by creating a file and running it using docker run on Linux, OS X or Windows.

Folds

Let's turn our attention to the teaching of folds, which is a classic hurdle to get newbies through, as it is a kind of forcing function for a variety of topics.

The right fold is classically defined like this:

foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

This is not a valid Duet program, because (1) it uses list syntax (lists aren't special), and (2) it uses case analysis at the declaration level. If you try substitution stepping these, you quickly arrive at an awkward conversation about the difference between the seemingly three-argument function foldr and lambdas, partial application, currying, and pattern matching, and about whether we're defining two functions or one. Here is the same program in Duet:

data List a = Nil | Cons a (List a)
foldr = \f -> \z -> \l ->
  case l of
    Nil -> z
    Cons x xs -> f x (foldr f z xs)

At the end of teaching the substitution model, I cover that \x y z -> ... is syntactic sugar for \x -> \y -> \z -> ..., but only after the intuition has been solidified that all Haskell functions take one argument. They may return other functions. So the updated program is:

data List a = Nil | Cons a (List a)
foldr = \f z l ->
  case l of
    Nil -> z
    Cons x xs -> f x (foldr f z xs)

Which is perfectly valid Haskell, and each part of it can be rewritten predictably.

Let's look at comparing foldr with foldl.

data List a = Nil | Cons a (List a)
foldr = \f z l ->
  case l of
    Nil -> z
    Cons x xs -> f x (foldr f z xs)
foldl = \f z l ->
  case l of
    Nil -> z
    Cons x xs -> foldl f (f z x) xs
list = Cons 1 (Cons 2 Nil)

Folds at a glance

For a quick summary, we can use holes, as in normal Haskell, indicated by _ or _foo. In Duet, these are ignored by the type system and the stepper, letting you run the stepper with holes in place. They don't result in an error, so you can build up expressions with them inside.

main_foldr = foldr _f _nil list
main_foldl = foldl _f _nil list
list = Cons 1 (Cons 2 (Cons 3 (Cons 4 Nil)))

(I increased the size of the list for longer, more compelling output.)

We can pass --concise, a convenience flag that filters out intermediate steps (cases, lambdas), which helps us see the "high-level" recursion. This flag is still under evaluation (no pun intended), but is useful here. The full output is worth studying with students too, but is too long to fit in this blog post. I will include a snippet from a non-concise example below.

The output looks like this:

$ duet run examples/folds-strictness.hs --main main_foldr --concise
foldr _f _nil list
_f 1 (foldr _f _nil (Cons 2 (Cons 3 (Cons 4 Nil))))
_f 1 (_f 2 (foldr _f _nil (Cons 3 (Cons 4 Nil))))
_f 1 (_f 2 (_f 3 (foldr _f _nil (Cons 4 Nil))))
_f 1 (_f 2 (_f 3 (_f 4 (foldr _f _nil Nil))))
_f 1 (_f 2 (_f 3 (_f 4 _nil)))
$ duet run examples/folds-strictness.hs --main main_foldl --concise
foldl _f _nil list
foldl _f (_f _nil 1) (Cons 2 (Cons 3 (Cons 4 Nil)))
foldl _f (_f (_f _nil 1) 2) (Cons 3 (Cons 4 Nil))
foldl _f (_f (_f (_f _nil 1) 2) 3) (Cons 4 Nil)
foldl _f (_f (_f (_f (_f _nil 1) 2) 3) 4) Nil
_f (_f (_f (_f _nil 1) 2) 3) 4

We can immediately see what the "right" part of foldr means. Experienced Haskellers can already see the teaching opportunities sprouting from the earth at this point. We're using O(n) space here, building nested thunks, or using too much stack. Issues abound.

Meanwhile, in foldl, we've shifted accumulation of the nested thunks to an argument of foldl, but at the end, we still have a nested thunk. Enter the strict left fold!

We also see the argument order come into play: _f is applied to 1 first in foldr (_f 1 (foldr ...)), but last in foldl (_f (_f _nil 1) ...), which is another important part of understanding the distinction between the two.
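The two shapes can also be reproduced in ordinary GHC Haskell by folding with a function that renders the application tree as a string (a sketch; foldrShape and foldlShape are names of my own, echoing the _f and _nil holes above):

```haskell
-- Render the application tree each fold builds, using strings
-- in place of an actual combining function and accumulator.
foldrShape, foldlShape :: [Int] -> String
foldrShape = foldr (\x acc -> "(_f " ++ show x ++ " " ++ acc ++ ")") "_nil"
foldlShape = foldl (\acc x -> "(_f " ++ acc ++ " " ++ show x ++ ")") "_nil"

main :: IO ()
main = do
  putStrLn (foldrShape [1, 2, 3, 4])
  -- (_f 1 (_f 2 (_f 3 (_f 4 _nil))))
  putStrLn (foldlShape [1, 2, 3, 4])
  -- (_f (_f (_f (_f _nil 1) 2) 3) 4)
```

The printed trees match the final lines of the two --concise runs: nesting to the right for foldr, nesting to the left for foldl.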

Strict folds

To see the low-level mechanics, and as a precursor to teaching strict folds, we ought to use an actual arithmetic operation (because you can't strictly evaluate a _ hole; by definition, it's missing):

main_foldr = foldr (\x y -> x + y) 0 list
main_foldl = foldl (\x y -> x + y) 0 list

The two folds eventually yield, respectively:

1 + (2 + 0)
1 + 2
3

And:

((\x y -> x + y) 0 1) + 2
((\y -> 0 + y) 1) + 2
(0 + 1) + 2
1 + 2
3

(Here you can also easily see where the 0 lies in the tree.)

Both still have the built-up thunk problem mentioned above.

Duet has bang patterns, so we can define a strict fold like this:

data List a = Nil | Cons a (List a)
foldr = \f z l ->
  case l of
    Nil -> z
    Cons x xs -> f x (foldr f z xs)
foldl = \f z l ->
  case l of
    Nil -> z
    Cons x xs -> foldl f (f z x) xs
foldl_ = \f z l ->
  case l of
    Nil -> z
    Cons x xs ->
      case f z x of
        !z_ -> foldl_ f z_ xs
list = Cons 1 (Cons 2 Nil)
main_foldr = foldr (\x y -> x + y) 0 list
main_foldl = foldl (\x y -> x + y) 0 list
main_foldl_ = foldl_ (\x y -> x + y) 0 list

(We don't allow ' as part of a variable name, as it's not really necessary and is confusing for non-Haskeller beginners. An underscore suffices.)
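In full GHC Haskell, the same strict left fold can be written with a bang pattern, which is essentially what Data.List's foldl' does (a sketch; foldlStrict is a name of my own):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict left fold in plain Haskell, mirroring Duet's foldl_:
-- the accumulator is forced before each recursive call, so no
-- chain of nested thunks builds up.
foldlStrict :: (b -> a -> b) -> b -> [a] -> b
foldlStrict f z l =
  case l of
    []     -> z
    x : xs ->
      let !z_ = f z x
      in foldlStrict f z_ xs

main :: IO ()
main = print (foldlStrict (+) 0 [1, 2, 3, 4])  -- 10
```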

Now, looking in detail without the --concise arg, just before the recursion, we see the force of the addition:

case Cons 1 (Cons 2 Nil) of
  Nil -> 0
  Cons x xs ->
    case (\x y -> x + y) 0 x of
      !z_ -> foldl_ (\x y -> x + y) z_ xs
case (\x y -> x + y) 0 1 of
  !z_ -> foldl_ (\x y -> x + y) z_ (Cons 2 Nil)
case (\y -> 0 + y) 1 of
  !z_ -> foldl_ (\x y -> x + y) z_ (Cons 2 Nil)
case 0 + 1 of
  !z_ -> foldl_ (\x y -> x + y) z_ (Cons 2 Nil)
case 1 of
  !z_ -> foldl_ (\x y -> x + y) z_ (Cons 2 Nil)
foldl_ (\x y -> x + y) 1 (Cons 2 Nil)

And finally, taking a glance with --concise, we see:

$ duet run examples/folds-strictness.hs --main main_foldl_ --concise
foldl_ (\x y -> x + y) 0 list
foldl_ (\x y -> x + y) 1 (Cons 2 (Cons 3 (Cons 4 Nil)))
foldl_ (\x y -> x + y) 3 (Cons 3 (Cons 4 Nil))
foldl_ (\x y -> x + y) 6 (Cons 4 Nil)
foldl_ (\x y -> x + y) 10 Nil
10

Which spells out quite clearly that now we are: (1) doing direct recursion, and (2) calculating the accumulator with each recursion step (0, 1, 3, 6, 10).

Concluding

This post serves as both knowledge sharing for our team and a public post to show the kind of detailed level of training that we're doing for our clients.

If you'd like Haskell training for your company, contact us to arrange a meeting.

Want to read more about Haskell? Check out our blog and our Haskell homepage.

Signup for our Haskell mailing list

December 30, 2019 05:18 AM

December 26, 2019

Matt Parsons

Write Junior Code

A plea to Haskellers everywhere.

Haskell has a hiring problem.

There aren’t many Haskell jobs, and there aren’t many Haskell employees. Haskell employees tend to be senior engineers, and the vast majority of job ads want senior-level Haskell candidates. The vast majority of Haskell users do not have any professional production experience, and yet almost every job wants production Haskell experience.

Why is this the case?

We write fancy code. Here’s a familiar story:

Boss: You’re going to be allowed to make a new project in whatever language you want. Even Haskell.

Employee: Oh yeah!! Time to write FANCY HASKELL!!

Employee writes a ton of really fancy Haskell, delivers fantastically and in about 1000 lines of code. Everyone is very impressed. The project grows in scope.

Boss: It’s time to hire another Haskeller. What are the job requirements?

Employee: Oh, they’ll need to know type-level programming, lenses, servant, Generics, monad transformers, mtl, and advanced multithreading in order to be productive anytime soon.

The boss then has trouble hiring anyone with that skill set. The project can’t really grow anymore. Maybe the original employee left, and now they have a legacy Haskell codebase that they can’t deal with. Maybe the original employee tried to train others on the codebase, but there was too much to teach before anyone could do anything productively.

Finally, someone gets hired on - they have several years of Production Haskell under their belts. But where did they come from? Another Haskell company, most likely! Now that company has a job to fill. Where do they fill it from? The same pool of candidates that they just lost someone to! They can’t hire juniors or train folks for the same reasons.

Break The Cycle

This coming year, let’s break the cycle.

Let’s write junior code.

Let’s write simple, basic, easy Haskell.

Let’s get bogged down with how much simple code we write.

Let’s make jobs for juniors.

Let’s hire juniors.

Let’s train those juniors into seniors.

Let us grow Haskell in industry by writing simpler code and making room for the less experienced.

Let’s not delete all of our fancy code - it serves a purpose! Let’s make it a small part of our codebase, preferably hidden in libraries with nice simple interfaces. Let us share the joy and wonder of an Actually (Mostly) Good programming language with the people that haven’t had the privilege and opportunity to work for years in it already.

Addendum:

Kudos to Marco Sampellegrini who wrote basically the same post as me today. Kudos to Michael Snoyman who has been championing Boring Haskell for a while. And kudos to everyone else who has contributed to making Haskell easy, and not just powerful/fun/flexible/fast/amazing.

December 26, 2019 12:00 AM

December 24, 2019

FP Complete

Async Exceptions in Haskell, and Rust

Before getting started: no, there is no such thing as an async exception in Rust. I'll explain what I mean shortly. Notice the comma in the title :).

GHC Haskell supports a feature called asynchronous (or async) exceptions. Normal, synchronous exceptions are generated by the currently running code from doing something like trying to read a file that doesn't exist. Asynchronous exceptions are generated from a different thread of execution, either another Haskell green thread, or the runtime system itself.

Perhaps the best example of using async exceptions is the timeout function. This function takes a certain number of microseconds and an action to run. If the action completes in that time, all is well. If the action doesn't complete in that time, then the thread running that action receives an async exception.
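As a minimal sketch of timeout in use (the durations here are arbitrary):

```haskell
import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

main :: IO ()
main = do
  -- The action finishes within the allotted 1 second.
  r1 <- timeout 1000000 (threadDelay 100000)
  print r1  -- Just ()

  -- The action takes too long; the thread running it receives
  -- an async exception and timeout returns Nothing.
  r2 <- timeout 100000 (threadDelay 1000000)
  print r2  -- Nothing
```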

Rust does not have exceptions at all, much less async exceptions. (Yes, panics behave fairly similarly to synchronous exceptions, but we'll ignore those in this context. They aren't relevant.) Rust also doesn't have a green thread-based runtime like Haskell does. There's basically no direct way to compare this async exception concept from Haskell into Rust.

Or, at least, there wasn't. With Tokio, async/.await, executor, tasks, and futures, the story is quite different. A Haskell green thread looks quite a bit like a Rust task. Suddenly there's a timeout function in Tokio. This post is going to compare the Haskell async exception mechanism to whatever powers Tokio's timeout. It's going to look at various trade-offs of the two different approaches. And I'll end with my own personal analysis.

Async exceptions in Haskell

The GHC Haskell runtime provides a green thread system. This means that there is a scheduler which assigns different green threads to actual OS threads to run on. These threads continue operating until they hit yield points. A common example of a yield point would be socket I/O. Take the pseudocode below:

socket <- openConnection address
send socket "Hello world!" -- yields
msg <- recv socket -- yields
putStrLn ("Received message: " ++ show msg)

Each time we perform what looks like blocking I/O in Haskell, in reality we are:

  • Registering a wakeup call with the scheduler when the socket completes its send or receive
  • Putting the current green thread to sleep
  • Being woken up again when the scheduler has a free OS thread and there is data on the socket

However, yield points happen far more often than just async I/O. Every time we perform any allocation, GHC automatically inserts a yield point. Since Haskell (unfortunately) tends to do a lot of heap allocation, this means that our code is implicitly littered with huge numbers of yield points. So much so, that we can essentially assume that at any point in our execution, we may hit a yield point.

And this brings us to async exceptions. Each green thread has its own queue of incoming async exceptions. And at each yield point, the runtime system will check if there are exceptions waiting on that queue. If so, it will pop one off the queue and throw it in the current green thread, where it can either be caught or, ultimately, take down the entire thread.

My best practice advice is to never recover from an async exception. Instead, you should only ever clean up your resources when an async exception occurs. In other words, if you ever catch an async exception, you may do some cleanup, but then you must immediately rethrow the exception.
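In code, that cleanup-then-rethrow advice looks roughly like this (a sketch; withCleanup is a name of my own, and this is essentially what onException from Control.Exception does; real code should also consider masking):

```haskell
import Control.Exception (SomeException, catch, throwIO, try)
import Data.IORef

-- Run an action; on any exception (sync or async), run the cleanup
-- and then rethrow. The exception is never swallowed.
withCleanup :: IO () -> IO a -> IO a
withCleanup release action =
  action `catch` \e -> do
    release
    throwIO (e :: SomeException)

main :: IO ()
main = do
  ref <- newIORef False
  result <- try (withCleanup (writeIORef ref True)
                             (throwIO (userError "boom")))
            :: IO (Either SomeException ())
  cleaned <- readIORef ref
  print cleaned  -- True: the cleanup ran
  print (either (const "rethrown") (const "no exception") result)
```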

Since an async exception can occur anywhere, we have to be highly paranoid when writing resource-safe code in Haskell. For example, consider this pseudocode:

h <- openFile fp WriteMode
setPerms 0o600 h `onException` closeFile h
useFile h `finally` closeFile h

In a world without async exceptions, this is exception safe. We first open the file. If opening throws an exception, then the openFile call itself is responsible for releasing any resources it acquires. Next, if setPerms throws an exception, our onException call ensures that closeFile will close the file handle. And finally, when we call useFile, we use finally to ensure that closeFile will be called regardless of an async exception occurring.

However, in a world with async exceptions, lots more can go wrong:

  • An exception can be generated between the call to openFile and setPerms, where there's not exception handler.
  • An exception can be generated between the call to setPerms and useFile

Instead, in Haskell, we have to mask async exceptions, which temporarily stops them from being delivered. The code above could be written as:

mask $ \restore -> do
    h <- openFile fp WriteMode
    setPerms 0o600 h `onException` closeFile h
    restore (useFile h) `finally` closeFile h

However, dealing with masking states is really complicated in general. So instead, we like to use helper functions like bracket:

bracket (openFile fp WriteMode) closeFile $ \h -> do
    setPerms 0o600 h
    useFile h

There are many more details around implementation and usage of async exceptions in Haskell, but this is sufficient for our comparison for now.
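As a runnable illustration of the bracket guarantee (using an IORef as a stand-in for a real resource; the names are mine):

```haskell
import Control.Exception (SomeException, bracket, throwIO, try)
import Data.IORef

main :: IO ()
main = do
  released <- newIORef False
  -- bracket takes acquire, release, and use actions. The release
  -- runs even though the use action throws.
  result <- try (bracket
                   (pure "handle")                     -- acquire
                   (\_ -> writeIORef released True)    -- release
                   (\_ -> throwIO (userError "boom"))) -- use: fails
            :: IO (Either SomeException ())
  wasReleased <- readIORef released
  print wasReleased  -- True: cleanup ran despite the exception
  print (either (const "exception propagated") (const "no exception") result)
```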

Canceled futures in Rust

The Future trait in Rust defines an abstraction for anything that can be awaited on. The core function is poll, which works something like this:

  • Tell me if you're ready
  • If you are ready, great! Tell me the completed value
  • If you're not ready, I want to register a Waker

The Waker can then interact with the executor to make sure that the task which is awaiting gets woken up when the Future is ready.

In a simple async application in Rust, you'll have a task that waits on one Future at a time. For example, in pseudocode again:

async {
    let socket = open_connection(&address);
    socket::send("Hello world!").await;
    let msg = socket::recv().await;
    println!("Received message: {}", msg);
}

Each of those awaits is a yield point. The executor can allow another task to run, and will wake up the current task when the I/O is complete. This is very similar to the Haskell example I gave above.

However, unlike Haskell:

  • There is no queue of async exceptions sitting and waiting to kill our task
  • There are no implicit yield points created by allocation

If there are no async exceptions, how exactly does a timeout work in Rust? Well, instead of a task waiting for a single Future to complete, it waits for one of two Futures to complete. You can check out the code yourself, but the basic idea is:

  • Create two Futures
    • The action you want to try to run
    • A timer that will complete when the timeout has expired
  • Whenever we poll to see if things are ready:
    • Check if the action is ready. If so: yay! Return its result as an Ok
    • Check if the timer is ready. If so: our timeout has expired, and we should return an Err saying how much time has elapsed.
    • If neither is ready, say that we're not ready either and wait to get woken up again
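This select-between-two-futures trick has a direct Haskell analogue: race from the async package, which could be used to build a timeout without the one from base (a sketch; timeoutVia is a name of my own, and note that race itself cancels the loser with an async exception):

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (race)

-- A timeout built by racing the action against a timer,
-- mirroring how Tokio's timeout polls two futures.
timeoutVia :: Int -> IO a -> IO (Maybe a)
timeoutVia micros action = do
  r <- race (threadDelay micros) action
  pure (either (const Nothing) Just r)

main :: IO ()
main = do
  r <- timeoutVia 100000 (threadDelay 1000000 >> pure "done")
  print r  -- Nothing: the timer won the race
```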

Personally, I think this is a pretty elegant solution to the problem. Like the Haskell solution, it means that the action can only be stopped at a yield point. However, unlike the Haskell solution, yield points will be far less common in a Rust program, since we don't have the implicit sprinkling of yields caused by allocation.

But now, let's talk about resource management. I made it clear that properly handling resources in the presence of async exceptions in Haskell is tricky. Not so in Rust! The standard way to handle resources is with RAII: you define a data type and stick a Drop on it. And in the world of cancellable Futures, this all works perfectly:

  • The Future itself owns any resources it's using
  • If the timeout triggers before the action completed, the Future in question is dropped
  • When the Future is dropped, the resources it owns are also dropped

The example below is more verbose than the Haskell equivalent above, but that's because we're defining a synthetic Resource struct. In real life code, such structs would likely already exist.

NOTE: You'll need at least Rust 1.39 to run the code below, and add a dependency on Tokio with a line like: tokio = { version = "0.2", features = ["macros", "time"] }.

use tokio::time::{delay_for, timeout};
use std::time::Duration;

struct Resource;

impl Resource {
    fn new() -> Self {
        println!("acquire");
        Resource
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        println!("release");
    }
}

async fn worker() {
    let _resource = Resource::new();
    for i in 1..=10 {
        delay_for(Duration::from_millis(100)).await;
        println!("i == {}", i);
    }
}

#[tokio::main]
async fn main() {
    println!("Round 1");
    let res = timeout(Duration::from_millis(2000), worker()).await;
    println!("{:?}", res);

    println!("\n\nRound 2");
    let res = timeout(Duration::from_millis(1000), worker()).await;
    println!("{:?}", res);

    println!("\n\nRound 3");
    let res = timeout(Duration::from_millis(500), worker()).await;
    println!("{:?}", res);
}

My analysis

The big point in Haskell's favor in all of this is its ability to preempt inside of computations. Whereas Rust's model lets you preempt most I/O actions, there won't be many yield points in other code. This can lead to lots of accidental blocking. There has been some discussion recently about possible mitigations of this issue at the executor level.

Haskell's advantage here is diminished by the fact that, if you have code that does not allocate any memory, you don't get any yield points. In practice this is rare, but it did affect some of my coworkers recently, so it's not unheard of. You can insert yield points back into an optimized application with -fno-omit-yields, though you can argue that the fact that this sometimes fails spectacularly is even worse.

I like the fact that, in Rust, you know exactly where your program may simply stop executing. Every time you see a .await, you know "well, it's entirely possible that the executor will just drop me before I come back." And the fact that ownership, RAII, and dropping solves resource management exactly the same in async and synchronous Rust code is beautiful.

Haskell pays a lot for the ability to kill threads with async exceptions. Every bit of code that manages resources needs to pay a cost in cognitive overhead. In practice, this truly does lead to a large number of bugs. Figuring out how and when to mask exceptions, and whether to have interruptible or uninterruptible masking (something I didn't really discuss), is another major curve ball. I think proper API design can mitigate a lot of the pain here. But the base library does not contain such API design, and bad practices abound.

And finally, a question: how important are cancellable tasks/killable threads in practice? Being able to time things out is certainly powerful in some cases. Racing two actions to see which one completes first? Less valuable, in my opinion. I certainly teach it when I give Haskell training, but there are usually more elegant ways to solve the same problem.

Since I'm stuck with async exceptions, I'll use timeout and race in Haskell: using them isn't the dangerous part; having them in the first place is. Were I to design a runtime system for Haskell from the ground up, I'm not sure I'd introduce the concept. It certainly solves some really tricky problems, like interrupting long-running pure code. But I'm not convinced the feature really pulls its weight.

On the other hand, in Rust, the feature is essentially free. The Future trait was designed to solve a bunch of general problems, and then at the library level it's possible to introduce a solution to cancel tasks. Pretty nifty.

And finally, where these two languages are the same. They both elegantly and easily solve async I/O problems in general. You get to write blocking-style code without the blocking. And both of them have pretty complicated details under the surface (Haskell: masking, Rust: the poll method) which we can usually, and fortunately, ignore and leave to others to mess around with.

Further reading

Feel free to check out our Haskell and Rust homepages for lots more content. If you're interested in learning all about exception handling in Haskell, check out our safe exception handling tutorial. And if you want to learn about async/await in Rust, I'd recommend lessons 8 and 9 of the Rust Crash Course.

Free engineering consultation with FP Complete

Signup for our Haskell mailing list

Signup for our Rust mailing list

December 24, 2019 05:56 AM

December 19, 2019

FP Complete

Serverless Rust using WASM and Cloudflare

I run a website for Haskellers. People are able to put their email addresses on this website for others to contact them. These email addresses were historically protected by Mailhide, which would use a Captcha to prevent bots from scraping that information. Unfortunately, Mailhide was shut down. And from there, Sorta Secret was born.

Sorta Secret provides a pretty simple service, as well as a simple API. Using the encrypt endpoint, you can get an encrypted version of your secret. Using the show endpoint, you can get a webpage that will decrypt the information after passing a Recaptcha. That's basically it. You can go to my Haskellers profile and click "Reveal email address" to see this in action.

I originally wrote Sorta Secret a year ago in Rust using actix-web and deployed it, like most services we write at FP Complete, to our Kubernetes cluster. When Rust 1.39 was released with async/await support, and then Hyper 0.13 was released using that support, I decided I wanted to try rewriting against Hyper. But that's a story for another time.

After that, more out of curiosity than anything else, I decided to rewrite it as a serverless application using Cloudflare Workers, a serverless platform that supports Rust and WASM. To quote the Cloudflare page on the topic:

Serverless computing is a method of providing backend services on an as-used basis. A Serverless provider allows users to write and deploy code without the hassle of worrying about the underlying infrastructure.

This post will describe my experiences doing this, what I thought worked well (and not so well), and why you may consider doing something like this yourself.

Advantages

Let me start off with the major advantages of using Cloudflare Workers over my previous setup:

  • Geographic distribution: A typical hosting setup, including the Kubernetes cluster I deploy to, is set up in a single geographic location. For an embarrassingly parallel application like this, having your code run in all of Cloudflare's data centers is pretty awesome.
  • Setup time/cost: I already have access to a Kubernetes cluster. But for someone who doesn't already have a preexisting server or cluster to deploy their service to, the time to set up a secure, high-availability deployment environment, and the cost of running those machines, can be high. I'm currently paying $0 to host this service on Cloudflare.
  • Ease of testing/deployment: The Cloudflare team has done a great job with the Wrangler tool. Deploying an update is a call to wrangler publish. I can do testing with wrangler preview --watch. This is pretty awesome. And the publishing is fast.

Disadvantages

There are definitely some hurdles to overcome along the way.

  • Lack of examples: I found it very difficult to get even basic things working correctly. I'm hoping this post helps with that.
  • WASM libraries didn't work perfectly: Most libraries designed to help with WASM are targeted at the browser. In a Cloudflare Worker, for example, there's no Window. Instead, to call fetch, I needed a ServiceWorkerGlobalScope.
  • Slower dev cycle than I'd like: While wrangler preview is awesome, it still takes quite a bit of time to see a change. Each code change requires recompiling the Rust code, packaging up the bundle, sending it to Cloudflare, and refreshing the page. Especially since I was using compile-time-checked HTML templates, this ended up being pretty slow.
  • Secrets management: Unlike Kubernetes, there's no built-in secrets management in Cloudflare Workers. Someone on the Cloudflare team advised me that I could use their key/value store for secrets. I elected to be really dumb and compile the secrets (encryption key and Recaptcha secret key) directly into the executable.
  • Difficult debugging: It seems that the combination of async code, panics, and the bridge to JavaScript results in error messages getting completely dropped, which makes debugging very difficult.
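To make the compiled-in secrets approach concrete, it can be as little as a module of constants. This is a hypothetical sketch, not the actual sortasecret source: the RECAPTCHA_SITE name mirrors the constant referenced later in this post, while the other names and all values are placeholders.

```rust
// Hypothetical secrets.rs: keys baked into the binary at compile time.
// All values below are placeholders, not real keys.
pub const RECAPTCHA_SITE: &str = "recaptcha-site-key-placeholder";
pub const RECAPTCHA_SECRET: &str = "recaptcha-secret-key-placeholder";
pub const ENCRYPTION_KEY: &[u8; 32] = b"0123456789abcdef0123456789abcdef";

fn main() {
    // The rest of the code refers to these as e.g. secrets::RECAPTCHA_SITE.
    println!("site key length: {}", RECAPTCHA_SITE.len());
}
```

The obvious downside is that rotating a key means recompiling and redeploying; the key/value store suggestion above avoids that.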

That's enough motivation and demotivation for now. Let's see how this all fits together.

Getting started

The Cloudflare team has put together a very nice command line tool, wrangler, which happens to be written in Rust. Getting started with a brand new Cloudflare Workers Rust project is nice and easy; you don't even need to set up an account or provide any credentials:

cargo install wrangler
wrangler generate wasm-worker https://github.com/cloudflare/rustwasm-worker-template.git
cd wasm-worker
wrangler preview --watch

The problem is that this template doesn't do much. There's a Rust function called greet that returns a String. That Rust function is exposed to the JavaScript world via wasm-bindgen. There's a small JavaScript wrapper that imports that function and calls it when a new request comes in. However, we want to do a lot more in this application:

  • Perform routing inside Rust
  • Perform async operations (specifically making requests to the Recaptcha server)
  • Generate more than just 200 success status responses
  • Parse submitted JSON bodies
  • Use HTML templating

So let's dive down the rabbit hole!

wasm-bindgen

I've played with WASM a bit before this project, but not much. Coming up to speed with wasm-bindgen was honestly pretty difficult for me, and involved a lot of trial-and-error. Ultimately, I discovered that I could probably get away with one of two approaches for the binding layer between the JavaScript and Rust worlds:

  1. Have a thin wrapper in JavaScript that produces simple JSON objects, and then use serde inside Rust to turn those into nice structs
  2. Use the Request and Response types in web-sys directly

I discovered the first approach first, and went with it. I briefly played with moving over to the second approach, but it involved a lot of overhaul to the code, so I ended up sticking with approach 1. Those more skilled with WASM may disagree with this choice. Anyway, here's what the JavaScript half of this looks like:

const { respond_wrapper } = wasm_bindgen;
await wasm_bindgen(wasm)

var body;
if (request.body) {
    body = await request.text();
} else {
    body = "";
}

var headers = {};
for(var key of request.headers.keys()) {
    headers[key] = request.headers.get(key);
}

const response = await respond_wrapper({
    method: request.method,
    headers: headers,
    url: request.url,
    body: body,
})
return new Response(response.body, {
    status: response.status,
    headers: response.headers,
})

Some interesting things to note here:

  • I'm pulling in the entire request body as a string. That works for our case (the only request body is form data), but isn't intelligent enough in general.
  • The respond_wrapper itself is returning a Promise on the JavaScript side. We're about to see some wasm-bindgen awesomeness.
  • There's not much work to convert between the simplified JSON values and the real JavaScript objects.

Now let's look at the Rust side of the equation. First we've got our Request and Response structs with appropriate serde deriving:

#[derive(Deserialize)]
pub struct Request {
    method: String,
    headers: HashMap<String, String>,
    url: String,
    body: String, // should really be Vec<u8>, I'm cheating here
}

#[derive(Serialize)]
pub struct Response {
    status: u16,
    headers: HashMap<String, String>,
    body: String,
}

Within the Rust world we want to deal exclusively with these types, and so our application lives inside a function with signature:

async fn respond(req: Request) -> Result<Response, Box<dyn std::error::Error>>

However, we can't export that to the JavaScript world. We need to ensure that our input and output types are things wasm-bindgen can handle. And to achieve that, we have a wrapper function that deals with the serde conversions and displaying the errors:

#[wasm_bindgen]
pub async fn respond_wrapper(req: JsValue) -> Result<JsValue, JsValue> {
    let req = req.into_serde().map_err(|e| e.to_string())?;
    let res = respond(req).await.map_err(|e| e.to_string())?;
    let res = JsValue::from_serde(&res).map_err(|e| e.to_string())?;
    Ok(res)
}

A wasm-bindgen function can accept JsValues (and lots of other types), and can return a Result<JsValue, JsValue>. In the case of an Err return, we'll get a runtime exception in the JavaScript world. We make our function pub so it can be exported. And by marking it async, we generate a Promise on the JavaScript side that can be awaited.

Other than that, it's some fairly standard serde stuff: converting from a JsValue into a Request via its Deserialize and converting a Response into a JsValue via its Serialize. In between those, we call our actual respond function, and map all error values into a String representation.

Routing

Our respond function receives a Request, and that Request has a url: String field. I was able to pull in the url crate directly, and then use its Url struct for easier processing:

let url: url::Url = req.url.parse()?;

Also, I wanted all requests to land on the www.sortasecret.com subdomain, so I added a bare domain redirect:

fn redirect_to_www(mut url: url::Url) -> Result<Response, url::ParseError> {
    url.set_host(Some("www.sortasecret.com"))?;
    let mut headers = HashMap::new();
    headers.insert("Location".to_string(), url.to_string());
    Ok(Response {
        status: 307,
        body: format!("Redirecting to {}", url),
        headers,
    })
}

if url.host_str() == Some("sortasecret.com") {
    return Ok(redirect_to_www(url)?);
}

This is already giving us some nice type safety guarantees from the Rust world, which I'm very happy to take advantage of. Next comes the routing itself. If I were more of a purist, I would make sure I was checking the request methods correctly, returning 405 "method not allowed" responses in some cases, and so on. Instead, I went for a very hacky implementation:

Ok(match (req.method == "GET", url.path()) {
    (true, "/") => html(200, server::homepage_html()?),
    (true, "/v1/script.js") => js(200, server::script_js()?),
    (false, "/v1/decrypt") => {
        let (status, body) = server::decrypt(&req.body).await;
        html(status, body)
    }
    (true, "/v1/encrypt") => {
        let (status, body) = server::encrypt(&req.url.parse()?)?;
        html(status, body)
    }
    (true, "/v1/show") => {
        let (status, body) = server::show_html(&req.url.parse()?)?;
        html(status, body)
    }
    (_method, path) => html(404, format!("Not found: {}", path)),
})

Which relies on some helper functions:

fn html(status: u16, body: String) -> Response {
    let mut headers = HashMap::new();
    headers.insert("Content-Type".to_string(), "text/html; charset=utf-8".to_string());
    Response { status, headers, body }
}

fn js(status: u16, body: String) -> Response {
    let mut headers = HashMap::new();
    headers.insert("Content-Type".to_string(), "text/javascript; charset=utf-8".to_string());
    Response { status, headers, body }
}

Let's dig in on some of these route handlers.

Templating

I'm using the askama crate for templating. This provides compile-time-parsed templates. For me, this is great because:

  • Errors are caught at compile time
  • Fewer files need to be shipped to the deployed system

The downside is you have to go through a complete compile/link step before you can see your changes.

I'm happy to report that there were absolutely no issues using askama on this project. It compiled for WASM without any changes to the code.

I have just one HTML template, which I use for both the homepage and the /v1/show route. There is only one variable in the template: the encrypted secret value. In the case of the homepage, we use some default message. For /v1/show, we use the value provided by the query string. Let's look at the entirety of the homepage logic:

#[derive(Template)]
#[template(path = "homepage.html")]
struct Homepage {
    secret: String,
}

fn make_homepage(keypair: &Keypair) -> Result<String, Box<dyn std::error::Error>> {
    Ok(Homepage {
        secret: keypair.encrypt("The secret message has now been decrypted, congratulations!")?,
    }.render()?)
}

Virtually all of the work is handled for us by askama itself. I defined a struct, added a few attributes, and then called render() on the value. Easy! I won't bore you with the details of the HTML here, but if you want, feel free to check out homepage.html on Github.

The story for script.js is similar, except that it takes the Recaptcha site key as a variable:

#[derive(Template)]
#[template(path = "script.js", escape = "none")]
struct Script<'a> {
    site: &'a str,
}

pub(crate) fn script_js() -> Result<String, askama::Error> {
    Script {
        site: super::secrets::RECAPTCHA_SITE,
    }.render()
}

Cryptography

When I originally wrote Sorta Secret using actix-web, I used the sodiumoxide crate to access the sealedbox approach within libsodium. This provides a public key-based method of encrypting a secret. Unfortunately, sodiumoxide didn't compile trivially with WASM, which isn't surprising given that it's a binding to a C library. It may have been possible to brute force my way through this, but I decided to take a different approach.

Instead, I moved over to the pure-Rust cryptoxide crate. It doesn't provide the same high-level APIs of sodiumoxide, but it does provide chacha20poly1305, which is more than enough to implement symmetric key encryption.

This meant I also needed to generate some random values to create nonces, which was my first debugging nightmare. I used the getrandom crate to generate the random values, and initially added the dependency as:

getrandom = "0.1.13"

I naively assumed that it would automatically turn on the correct set of features to use WASM-relevant random data sources. Unfortunately, that wasn't the case. Instead, the calls to getrandom would simply panic about an unsupported backend. And while Cloudflare's preview system overall gives a great experience with error messages, the combination of a panic and a Promise meant that the exception was lost. By temporarily turning off the async bits and some other hacky workarounds, I eventually found out what the problem was, and eventually fixed it all by replacing the above line with:

getrandom = { version = "0.1.13", features = ["wasm-bindgen"] }

If you're curious, you can check out the encrypt and decrypt methods on Github. One pleasant finding was that, once I got the code compiling, all of the tests passed the first time, which is always an experience I strive for in strongly typed languages.

Parsing query strings

Both the /v1/encrypt and /v1/show endpoints take a single query string parameter, secret. In the case of encrypt, this is a plaintext value. In the case of show, it's the encrypted ciphertext. However, they both parse initially to a String, so I used the same (poorly named) struct to handle parsing both of them. If you remember from before, I already parsed the requested URL into a url::Url value. Using serde_urlencoded makes it easy to throw all of this together:

#[derive(Deserialize, Debug)]
struct EncryptRequest {
    secret: String,
}

impl EncryptRequest {
    fn from_url(url: &url::Url) -> Option<Self> {
        serde_urlencoded::from_str(url.query()?).ok()
    }
}

Using this from the encrypt endpoint looks like this:

pub(crate) fn encrypt(url: &url::Url) -> Result<(u16, String), Box<dyn std::error::Error>> {
    match EncryptRequest::from_url(url) {
        Some(encreq) => {
            let keypair = make_keypair()?;
            let encrypted = keypair.encrypt(&encreq.secret)?;
            Ok((200, encrypted))
        }
        None => Ok((400, "Invalid parameters".into())),
    }
}

Feel free to check out the show_html endpoint too.

Parsing JSON request body

On the homepage and /v1/show page, we load up the script.js file to talk to the Recaptcha servers, get a token, and then send the encrypted secrets and that token to the /v1/decrypt endpoint. This data is sent in a PUT request with a JSON request body. We call this a DecryptRequest, and once again we can use serde to handle all of the parsing:

#[derive(Deserialize)]
struct DecryptRequest {
    token: String,
    secrets: Vec<String>,
}

pub(crate) async fn decrypt(body: &str) -> (u16, String) {
    let decreq: DecryptRequest = match serde_json::from_str(body) {
        Ok(x) => x,
        Err(_) => return (400, "Invalid request".to_string()),
    };

    ...
}

At the beginning of this post, I mentioned the possibility of using the original JavaScript Request value instead of creating a simplified JSON representation of it. If we did so, we could call out to the json method instead. As it stands now, converting the request body to a String and parsing with serde works just fine.

I haven't looked into them myself, but there are certainly performance and code-size trade-offs to consider when deciding on the best solution here.

Outgoing HTTP requests

The final major hurdle was making the outgoing HTTP request to the Recaptcha server. When I did my Hyper implementation of Sorta Secret, I used the surf crate, which seemed at first to have WASM support. Unfortunately, I ended up running into two major (and difficult to debug) issues trying to use Surf for the WASM part of this:

  • The Surf code assumes that there will be a Window, and panics if there isn't. Within Cloudflare, there isn't a Window available. Instead, I had to use a ServiceWorkerGlobalScope. Debugging this was again tricky because of the dropped error messages. But I eventually fixed this by tweaking the Surf codebase with a function like:

    pub fn worker_global_scope() -> Option<web_sys::ServiceWorkerGlobalScope> {
        js_sys::global().dyn_into::<web_sys::ServiceWorkerGlobalScope>().ok()
    }
    
  • However, once I did this, I kept getting 400 invalid parameter responses from the Recaptcha servers. I eventually spun up a local server to dump all request information, used ngrok to make that service available to Cloudflare, and pointed the code at that ngrok hostname. I found out that it wasn't sending any request body at all.

I dug through the codebase a bit, and eventually found issue #26, which demonstrated that body uploads weren't supported yet. I considered trying to patch the library to add that support, but after a few initial attempts it looked like that would require deeper modifications than I was ready to attempt.

So instead, I decided to go in the opposite direction and call the fetch API directly myself via the web-sys crate. This involves the following steps:

  • Create a RequestInit value
  • Fill it with the appropriate request method and form data
  • Create a Request from that RequestInit and the Recaptcha URL
  • Get the global ServiceWorkerGlobalScope
  • Call fetch on it
  • Convert some Promises into Futures and .await them
  • Use serde to convert the JsValue containing the JSON response body into a VerifyResponse

Got that? Great! Putting all of that together looks like this:

use web_sys::{Request, RequestInit, Response};
let mut opts = RequestInit::new();
opts.method("POST");
let form_data = web_sys::FormData::new()?; // web-sys should really require mut here...
form_data.append_with_str("secret", &body.secret)?;
form_data.append_with_str("response", &body.response)?;
opts.body(Some(&form_data));
let request = Request::new_with_str_and_init(
    "https://www.google.com/recaptcha/api/siteverify",
    &opts,
)?;

request.headers().set("User-Agent", "sortasecret")?;

let global = worker_global_scope().ok_or(VerifyError::NoGlobal)?;
let resp_value = JsFuture::from(global.fetch_with_request(&request)).await?;
let resp: Response = resp_value.dyn_into()?;
let json = JsFuture::from(resp.json()?).await?;
let verres: VerifyResponse = json.into_serde()?;

Ok(verres)

And with that, all was well!

Surprises

I've called out a few of these above, but let me collect some of my surprise points while implementing this.

  • The lack of error messages during the panic and async combo was a real killer. Maybe there's a way to improve that situation that I haven't figured out yet.
  • I was pretty surprised that getrandom would panic without the correct feature set.
  • I was also surprised that Surf silently dropped all form data, and implicitly expected a Window context that wasn't there.

On the Cloudflare side itself, the only real hurdles I hit were when it came to deploying to my own domain name instead of a workers.dev domain. The biggest gotcha was that I needed to fill in a dummy A record. I eventually found an explanation here. I got more confused during the debugging of this due to DNS propagation issues, but that's entirely my own fault.

Also, I shot myself in the foot with the route syntax in the wrangler.toml. I had initially put www.sortasecret.com, which meant it used workers to handle the homepage, but passed off requests for all other paths to my original actix-web service. I changed my route to be:

route = "*sortasecret.com/*"

I don't really blame Cloudflare docs for that, it's pretty well spelled out, but I did overlook it.

Once all of that was in place, it's wonderful to have access to the full suite of domain management tools for Cloudflare, such as HTTP to HTTPS redirection, and the ability to set virtual CNAMEs on the bare domain name. This made it trivial to set up my redirect from sortasecret.com to www.sortasecret.com.

Conclusion

I figured this rewrite would be a long one, and it was. I was unfamiliar with basically all of the technologies I ended up using: wasm-bindgen, Cloudflare Workers, and web-sys. Given all that, I'm not disappointed with the time investment.

If I were going to do this again, I'd probably factor out a significant number of common components into a cloudflare crate I could reuse, providing things like:

  • More fully powered Request and Response types
  • A wrapper function to promote an async fn(Request) -> Result<Response, Box<dyn Error>> into something that can be exported by wasm-bindgen
  • Helper functions for the fetch API
  • Possibly wrap some of the other JavaScript and WASM APIs around things like JSON and crypto (though cryptoxide worked great for me)

With those tools in place, I would definitely consider using Cloudflare Workers like this again. The cost and maintenance benefits are great, the performance promises to be great, and I get to keep the safety guarantees I love about Rust.

Are others using Cloudflare Workers with Rust? Interested in it? Please let me know on Twitter.

And if your company is considering options in the DevOps, serverless, or Rust space, please consider reaching out to our team to find out how we can help you.


December 19, 2019 03:18 PM

Yesod Web Framework

A new Yesod book in Portuguese

Yesod users,

In order to help spread the Yesod word here in Brazil, we, Alexandre Garcia de Oliveira, Patrick Augusto da Silva (@ptkato_ on Twitter), and Felipe Cannarozzo Lourenço, have recently released a book about Yesod: "Yesod e Haskell: Aplicações web com Programação Funcional pura" ("Yesod and Haskell: Web Applications with Pure Functional Programming", in English).

The book covers the principles of developing a web application with Yesod. Needless to say, it follows the same model as Alexandre's first book on Haskell, giving much-needed insight into the Yesod world within Brazil's borders.

The book aims to let the reader learn Yesod from scratch. It starts with the basics, like setting up the environment using stack, and uses a monolithic single-file example to explain the foundations of the framework. From there it covers Stackage snapshots, the difference between templates and scaffolding, Shakespearean templates, type-safe routing, Persistent basics, and authentication & authorization, finishing up with a RESTful app example.

The idea of writing a book about Yesod came to fruition during one of Alexandre's lectures on Yesod at FATEC-Santos, when he realized that many students didn't read English at all and that Yesod documentation in Portuguese was virtually non-existent. The case for the book became even stronger when the TAs' (Patrick's and Felipe's) tutoring sessions were almost always spent going over those same topics, a problem that Portuguese documentation could easily have solved.

The book was written in an academic setting, and it was shaped to fit the needs of undergraduate students fairly well. Hundreds of FATEC-Santos alumni have now experienced the joy of using Yesod; Santos is indeed the city in Brazil with the highest number of people who know Yesod. We're committed to pushing Yesod forward through the local market, and maybe, in the future, even beyond.

The book was published by "Casa do Código" and can be found here.

December 19, 2019 03:29 AM

Tweag I/O

Haskell art in your browser with Asterius

Sylvain Henry (IOHK), Shao Cheng

Asterius is an experimental GHC backend targeting WebAssembly, which makes it possible to run Haskell code in your browser or in a Node.js web service. Asterius has reached a new milestone: it can now compile the popular diagrams library for drawing with Haskell.

In recent months, Asterius has become a collaborative project with fixes and bug reports from the community, and major contributions from IOHK in addition to Tweag I/O.

In this post, we'll demonstrate how to run diagrams examples in the browser. This is the culmination of a lot of groundwork, from better Cabal support to implementing green threads and many basic concurrency primitives. More on that later in the post.

Hilbert in your browser

Our example is about generating and displaying SVG directly in the browser using diagrams. We picked the Hilbert curve example from diagrams's gallery. To use it with Asterius, we just have to adapt the code provided in the gallery example as follows.

Let's start with the imports:

import Asterius.Types
import Diagrams.Backend.SVG
import Diagrams.Prelude

We then define hilbert and example, exactly as in the original:

hilbert :: Int -> Trail V2 Double
hilbert 0 = mempty
hilbert n =
  hilbert' (n - 1)
    # reflectY
    <> vrule 1
    <> hilbert (n - 1)
    <> hrule 1
    <> hilbert (n - 1)
    <> vrule (-1)
    <> hilbert' (n - 1)
    # reflectX
  where
    hilbert' m = hilbert m # rotateBy (1 / 4)

example :: Diagram B
example = frame 1 . lw medium . lc darkred . strokeT $ hilbert 5

Next up is showSVG, an embedded fragment of JavaScript code that will be executed in the browser. It's an immediately invoked function expression that appends a div element with the given contents to the page body.

foreign import javascript
   "(() => {                                    \
   \   const d = document.createElement('div'); \
   \   d.innerHTML = ${1};                      \
   \   document.body.appendChild(d);            \
   \ })()"
   showSVG :: JSString -> IO ()

Finally, main uses standard diagrams code to generate an SVG file as a String, then calls showSVG to display the element in the browser.

main :: IO ()
main = do
  let opts = SVGOptions
        { _size = dims2D 400 400,
          _svgDefinitions = Nothing,
          _idPrefix = mempty,
          _svgAttributes = [],
          _generateDoctype = False
        }
      svg = renderDia SVG opts example
  showSVG (toJSString (show svg))

To compile and test this program, we turn it into a package with the help of a Cabal file. Here is the contents of the Hilbert.cabal file:

cabal-version: 1.24

name:           Hilbert
version:        0.0.1
license:        BSD3
build-type:     Simple

executable Hilbert
  main-is: Hilbert.hs
  ghc-options: -Wall
  build-depends:
        base
      , text
      , diagrams
      , diagrams-svg
      , diagrams-lib
      , asterius-prelude
      , svg-builder
      , lucid-svg
  default-language: Haskell2010

As usual, the quickest way to get started with Asterius is to use our Docker image:

$ docker run -it --rm -v $(pwd):/mirror -w /mirror terrorjack/asterius
asterius@hostname:/mirror$

This command pulls the latest tag of our Docker image, maps the current working directory as a shared volume at /mirror, making it the working directory of the new container, and then enters a bash session.

To build the Hilbert project, proceed as follows:

asterius@hostname:/mirror$ ahc-cabal new-update
# Short update time
asterius@hostname:/mirror$ ahc-cabal new-install . --symlink-bindir .
# Longer build time

ahc-cabal is a wrapper around the cabal executable, which supports almost all cabal commands, including the legacy v1 build commands and the nix-style new build commands. Here we use new-install to build the Hilbert "executable" along with all its dependencies, with each component installed into the nix-style cabal store. After the build finishes, a Hilbert symbolic link will appear in /mirror, which points to the Hilbert "executable" we've just built.

Finally, we need to extract the WebAssembly & JavaScript artifacts from the Hilbert file. In an earlier post, we used the ahc-link wrapper to that effect, but ahc-link generates wasm and mjs files from individual .hs files. Cabal, in contrast, outputs a single executable file. So we need to use another wrapper, ahc-dist, which generates wasm and mjs files from such an executable. Except for the input, ahc-link and ahc-dist flags are the same:

asterius@hostname:/mirror$ ahc-dist --browser --input-exe Hilbert
[INFO] Converting linked IR to binaryen IR
[INFO] Running binaryen optimization
[INFO] Validating binaryen IR
[INFO] Writing WebAssembly binary to "./Hilbert.wasm"
[INFO] Writing JavaScript runtime modules to "."
[INFO] Writing JavaScript loader module to "./Hilbert.wasm.mjs"
[INFO] Writing JavaScript req module to "./Hilbert.req.mjs"
[INFO] Writing JavaScript entry module to "./Hilbert.mjs"
[INFO] Writing HTML to "./Hilbert.html"

The --browser flag indicates that we are targeting the browser instead of Node.js. It generates the .wasm and .mjs files along with an .html file which loads and runs the program. Outside the Docker container, we can use a static web server to serve the artifacts and load Hilbert.html into a browser tab. We recommend warp from the wai-app-static package:

$ warp -v
Serving directory [...] on port 3000 with ["index.html","index.htm"] index files.

$ firefox "localhost:3000/Hilbert.html"

Your browser should display the following image:

[image: the Hilbert curve example rendered as an SVG]

And here is a precompiled version you can try right now in your browser. (Due to an open issue, this example cannot currently be used in Safari.)

A taste of how we got here

To support the example above and many others, we improved Asterius along a number of dimensions over the last few months, each of which we aim to cover in its own blog post in the near future:

  • Template Haskell support: We now have partial TH support, which is enough to compile most packages. Splices are compiled to WebAssembly and executed in Node.js, using pipes to communicate with the host ahc process, similar to the iserv remote interpreter of standard GHCi. The lack of TH support has been a major roadblock for Asterius as well as some other Haskell-to-Web solutions like haste, since many packages use TH, either via splices or annotations (e.g. the HLINT annotations), and some of these packages are quite common in the dependency graphs of typical Haskell projects.
  • Concurrent runtime: The Asterius runtime is now concurrent, with support for green threads. It supports preemptive scheduling of several threads, timers (threadDelay), MVars and more.
  • ahc-cabal: A lot more packages can be built with ahc-cabal. While ahc-link is still convenient for testing single-file Main programs, Asterius users can now structure their code as regular Cabal projects, and pull dependencies from Hackage.
  • Docker image with prebuilt Stackage packages: To save users the time of setting up a local Asterius installation and compiling common dependencies, our prebuilt Docker image now also ships with around 2k prebuilt packages from a recent Stackage LTS snapshot. Due to factors like missing cbits, some of them won't work yet (e.g. cryptonite), but pure Haskell packages like diagrams should work fine.
  • Cabal custom setup support: A lot of packages use custom Setup.hs files to jailbreak the Cabal build system and practice all forms of dark arts. We now have partial support for custom setup which suffices to compile packages like lens.
  • Improved runtime performance: A great advantage of having examples running in the browser is that we can use the browser-integrated devtools to spot performance problems or dig into runtime errors. For example, it helped us detect a problem where programs were spending much more time in the collector rather than the mutator. We fixed the issue and the garbage collection overhead is now much more acceptable.
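The concurrent runtime mentioned above supports the ordinary base concurrency APIs (forkIO, threadDelay, MVar). As a small illustration of the kind of program that now runs, here is a sketch of my own (not from the Asterius docs): it is plain GHC code, and under Asterius one would compile it with ahc-link.

```haskell
import Control.Concurrent

-- Fork a green thread that sleeps briefly, then hands a result back
-- through an MVar; the main thread blocks until the value arrives.
pingPong :: IO String
pingPong = do
  box <- newEmptyMVar
  _ <- forkIO $ do
    threadDelay 10000  -- 10 ms
    putMVar box "done"
  takeMVar box

main :: IO ()
main = pingPong >>= putStrLn
```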

December 19, 2019 12:00 AM

December 18, 2019

Brent Yorgey

Counting inversions via rank queries

In a post from about a year ago, I explained an algorithm for counting the number of inversions of a sequence in O(n \lg n) time. As a reminder, given a sequence a_1, a_2, \dots, a_n, an inversion is a pair of positions i, j such that a_i and a_j are in the “wrong order”, that is, i < j but a_i > a_j. There can be up to n(n-1)/2 inversions in the worst case, so we cannot hope to count them in faster than quadratic time by simply incrementing a counter. In my previous post, I explained one way to count inversions in O(n \lg n) time, using a variant of merge sort.

I recently learned of an entirely different algorithm for achieving the same result. (In fact, I learned of it when I gave this problem on an exam and a student came up with an unexpected solution!) This solution does not use a divide-and-conquer approach at all, but hinges on a clever data structure.

Suppose we have a bag of values (i.e. a collection where duplicates are allowed) on which we can perform the following two operations:

  1. Insert a new value into the bag.
  2. Count how many values in the bag are strictly greater than a given value.

We’ll call the second operation a rank query because it really amounts to finding the rank or index of a given value in the bag—how many values are greater than it (and thus how many are less than or equal to it)?

If we can do these two operations in logarithmic time (i.e. logarithmic in the number of values in the bag), then we can count inversions in O(n \lg n) time. Can you see how before reading on? You might also like to think about how we could actually implement a data structure that supports these operations.

Counting inversions with bags and rank queries

So, let’s see how to use a bag with logarithmic insertion and rank queries to count inversions. Start with an empty bag. For each element in the sequence, see how many things in the bag are strictly greater than it, and add this count to a running total; then insert the element into the bag, and repeat with the next element. That is, for each element we compute the number of inversions of which it is the right end, by counting how many elements that came before it (and are hence in the bag already) are strictly greater than it. It’s easy to see that this will count every inversion exactly once. It’s also easy to see that it will take O(n \lg n) time: for each of the n elements, we do two O(\lg n) operations (one rank query and one insertion).

In fact, we can do a lot more with this data structure than just count inversions; it sometimes comes in handy for competitive programming problems. More in a future post, perhaps!

So how do we implement this magical data structure? First of all, we can use a balanced binary search tree to store the values in the bag; clearly this will allow us to insert in logarithmic time. However, a plain binary search tree wouldn’t allow us to quickly count the number of values strictly greater than a given query value. The trick is to augment the tree so that each node also caches the size of the subtree rooted at that node, being careful to maintain these counts while inserting and balancing.

Augmented red-black trees in Haskell

Let’s see some code! In Haskell, probably the easiest type of balanced BST to implement is a red-black tree. (If I were implementing this in an imperative language I might use splay trees instead, but they are super annoying to implement in Haskell. (At least as far as I know. I will definitely take you out for a social beverage of your choice if you can show me an elegant Haskell implementation of splay trees! This is cool but somehow feels too complex.)) However, this isn’t going to be some fancy, type-indexed, correct-by-construction implementation of red-black trees, although that is certainly fun. I am actually going to implement left-leaning red-black trees, mostly following Sedgewick; see those slides for more explanation and proof. This is one of the simplest ways I know to implement red-black trees (though it’s not necessarily the most efficient).

First, a red-black tree is either empty, or a node with a color (which we imagine as the color of the incoming edge), a cached size, a value, and two subtrees.

> {-# LANGUAGE PatternSynonyms #-}
> 
> data Color = R | B
>   deriving Show
> 
> otherColor :: Color -> Color
> otherColor R = B
> otherColor B = R
> 
> data RBTree a
>   = Empty
>   | Node Color Int (RBTree a) a (RBTree a)
>   deriving Show

To make some of the tree manipulation code easier to read, we make some convenient patterns for matching on the structure of a tree when we don’t care about the values or cached sizes: ANY matches any non-empty tree, binding its two subtrees, while RED and BLACK only match nodes of the appropriate color. We also make a function to extract the cached size of a subtree.

> pattern ANY   l r <- Node _ _ l _ r
> pattern RED   l r <- Node R _ l _ r
> pattern BLACK l r <- Node B _ l _ r
> 
> size :: RBTree a -> Int
> size Empty            = 0
> size (Node _ n _ _ _) = n

The next thing to implement is the workhorse of most balanced binary tree implementations: rotations. The fiddliest bit here is managing the cached sizes appropriately. When rotating, the size of the root node remains unchanged, but the new child node, as compared to the original, has lost one subtree and gained another. Note also that we will only ever rotate around red edges, so we pattern-match on the color as a sanity check, although this is not strictly necessary. The error cases below should never happen.

> rotateL :: RBTree a -> RBTree a
> rotateL (Node c n t1 x (Node R m t2 y t3))
>   = Node c n (Node R (m + size t1 - size t3) t1 x t2) y t3
> rotateL _ = error "rotateL on non-rotatable tree!"
> 
> rotateR :: RBTree a -> RBTree a
> rotateR (Node c n (Node R m t1 x t2) y t3)
>   = Node c n t1 x (Node R (m - size t1 + size t3) t2 y t3)
> rotateR _ = error "rotateR on non-rotatable tree!"

To recolor a node, we just flip its color. We can then split a tree with two red subtrees by recoloring all three nodes. (The “split” terminology comes from the isomorphism between red-black trees and 2-3-4 trees; red edges can be thought of as “gluing” nodes together into a larger node, and this recoloring operation corresponds to splitting a 4-node into three 2-nodes.)

> recolor :: RBTree a -> RBTree a
> recolor Empty            = Empty
> recolor (Node c n l x r) = Node (otherColor c) n l x r
> 
> split :: RBTree a -> RBTree a
> split (Node c n l@(RED _ _) x r@(RED _ _))
>   = (Node (otherColor c) n (recolor l) x (recolor r))
> split _ = error "split on non-splittable tree!"

Finally, we implement a function to “fix up” the invariants by doing rotations as necessary: if we have two red subtrees we don’t touch them; if we have only one right red subtree we rotate it to the left (this is where the name “left-leaning” comes from), and if we have a left red child which itself has a left red child, we rotate right. (This function probably seems quite mysterious on its own; see Sedgewick for some nice pictures which explain it very well!)

> fixup :: RBTree a -> RBTree a
> fixup t@(ANY (RED _ _) (RED _ _)) = t
> fixup t@(ANY _         (RED _ _)) = rotateL t
> fixup t@(ANY (RED (RED _ _) _) _) = rotateR t
> fixup t = t

We can finally implement insertion. First, to insert into an empty tree, we create a red node with size 1.

> insert :: Ord a => a -> RBTree a -> RBTree a
> insert a Empty = Node R 1 Empty a Empty

If we encounter a node with two red children, we perform a split before continuing. This may violate the red-black invariants above us, but we will fix it up later on our way back up the tree.

> insert a t@(ANY (RED _ _) (RED _ _)) = insert a (split t)

Otherwise, we compare the element to be inserted with the root, insert on the left or right as appropriate, increment the cached size, and fixup the result. Notice that we don’t stop recursing upon encountering a value that is equal to the value to be inserted, because our goal is to implement a bag rather than a set. Here I have chosen to put values equal to the root in the left subtree, but it really doesn’t matter.

> insert a (Node c n l x r)
>   | a <= x    = fixup (Node c (n+1) (insert a l) x r)
>   | otherwise = fixup (Node c (n+1) l x (insert a r))

Implementing rank queries

Now, thanks to the cached sizes, we can count the values greater than a query value.

> numGT :: Ord a => RBTree a -> a -> Int

The empty tree contains 0 values strictly greater than anything.

> numGT Empty _ = 0

For a non-empty tree, we distinguish two cases:

> numGT (Node _ n l x r) q

If the query value q is less than the root, then we know that the root along with everything in the right subtree is strictly greater than q, so we can just add 1 + size r without recursing into the right subtree. We also recurse into the left subtree to count any values greater than q it contains.

>   | q < x     = numGT l q + 1 + size r

Otherwise, if q is greater than or equal to the root, any values strictly greater than q must be in the right subtree, so we recurse to count them.

>   | otherwise = numGT r q

By inspection we can see that numGT calls itself at most once, moving one level down the tree with each recursive call, so it makes a logarithmic number of calls, with only a constant amount of work at each call—thanks to the fact that size takes only constant time to look up a cached value.

Counting inversions

Finally, we can put together the pieces to count inversions. The code is quite simple: recurse through the list with an accumulating red-black tree, doing a rank query on each value, and sum the results.

> inversions :: Ord a => [a] -> Int
> inversions = go Empty
>   where
>     go _ []     = 0
>     go t (a:as) = numGT t a + go (insert a t) as

Let’s try it out!

λ> inversions [3,5,1,4,2]
6
λ> inversions [2,2,2,2,2,1]
5
λ> :set +s
λ> inversions [3000, 2999 .. 1]
4498500
(0.19 secs, 96,898,384 bytes)

It seems to work, and is reasonably fast!
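As a sanity check (my addition, not from the original post), we can compare against a brute-force quadratic count, which for each element simply counts the later elements that are smaller:

```haskell
-- Naive O(n^2) inversion count: for each element, count the strictly
-- smaller elements that come after it.
inversionsNaive :: Ord a => [a] -> Int
inversionsNaive []     = 0
inversionsNaive (x:xs) = length (filter (x >) xs) + inversionsNaive xs
```

On the examples above it agrees: 6 for [3,5,1,4,2], 5 for [2,2,2,2,2,1], and 4498500 for [3000, 2999 .. 1].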

Exercises

  1. Further augment each node with a counter representing the number of copies of the given value which are contained in the bag, and maintain the invariant that each distinct value occurs in only a single node.

  2. Rewrite inversions without a recursive helper function, using a scan, a zip, and a fold.

  3. It should be possible to implement bags with rank queries using finger trees instead of building our own custom balanced tree type (though it seems kind of overkill).

  4. My intuition tells me that it is not possible to count inversions faster than n \lg n. Prove it.

by Brent at December 18, 2019 12:48 PM

Chris Penner

Algebraic lenses

Algebraic lenses

This is a blog post about optics; if you're at all interested in optics, I suggest you go check out my book: Optics By Example. It covers everything you need to go from beginner to master in all things optics! Check it out and tell your friends; now onwards to the post you're here for.

In this post we're going to dig into an exciting new type of optics, the theory of which is described in this abstract by Mario Román, Bryce Clarke, Derek Elkins, Jeremy Gibbons, Bartosz Milewski, Fosco Loregian, and Emily Pillmore. Thanks go out to these awesome folk for researching optics at a high level! The more that we realize the Category Theory representations of optics the more we can convince ourselves that they're a true and beautiful abstraction rather than just a useful tool we stumbled across.

I'm not really a "Mathy" sort of guy, I did very little formal math in university, and while I've become comfortable in some of the absolute basics of Category Theory through my travels in Haskell, I certainly wouldn't consider myself well-versed. I AM however well versed in the practical uses of optics, and so of course I need to keep myself up to speed on new developments, so when this abstract became available I set to work trying to understand it!

Most of the symbols and Category Theory went straight over my head, but I managed to pick out a few bits and pieces that we'll look at today. I'll be translating what little I understand into a language which I DO understand: Haskell!

If the above wasn't enough of a disclaimer I'll repeat: I don't really understand most of the math behind this stuff, so it's very possible I've made a few (or a lot) of errors, though to be honest I think the result I've come to is interesting on its own, even if it's not a perfect representation of the ideas in the abstract. Please correct me if you know better :)

There are several new types of optics presented in the paper; we'll start by looking at one of them in particular, but will set the groundwork for the others, which I'll hopefully get to in future posts. Today we'll be looking at "Algebraic lenses"!

Translating from Math

We'll start by taking a look at the formal characterization of algebraic lenses presented in the abstract. By the characterization of an optic I mean a set of values which completely describe the behaviour of that optic. For instance a Lens s t a b is characterized by a getter and a setter: (s -> a, s -> b -> t) and an Iso s t a b is characterized by its to and from functions: (s -> a, b -> t).

The paper presents the characterization of an algebraic lens like this: (my apologies for lack of proper LaTeX on my blog 😬)

  • Algebraic Lens: (S → A) × (ψS × B → T)

My blog has kind of butchered the formatting, so feel free to check it out in the abstract instead.

I'm not hip to all these crazy symbols, but as best as I can tell, we can translate it roughly like this:

  • Algebraic Lens: (s -> a, f s -> b -> t)

If you squint a bit, this looks really close to the characterization of a standard lens, the only difference being that instead of a single s we have some container f filled with them. The type of container further specifies what type of algebraic lens we're dealing with. For instance, the paper calls it a List Lens if f is chosen to be a list [], but we can really define optics for nearly any choice of f, though Traversable and Foldable types are a safe bet to start.
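Written as plain Haskell tuples (my own illustration, not types from any library), the two characterizations sit side by side, and the only difference is the `f` wrapped around the input of the second component:

```haskell
-- A standard lens is characterized by a getter and a setter;
-- an algebraic lens replaces the single `s` fed to the second
-- component with a whole container of them.
type LensChar      s t a b = (s -> a,   s -> b -> t)
type AlgLensChar f s t a b = (s -> a, f s -> b -> t)

-- The paper's "list lens" is the special case where f is []:
type ListLensChar  s t a b = AlgLensChar [] s t a b
```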

So, what can we actually do with this characterization? Well for starters it implies we can pass it more than one s at once, which is already different from a normal lens, but we can also use all of those s's alongside the result of the continuation (i.e. b) to choose our return value t. That probably sounds overly general, and that's because it is! We're dealing with a mathematical definition, so it's intentionally as general as possible.

To put it into slightly more concrete terms, an Algebraic lens allows us to run some aggregation over a collection of substates of our input, then use the result of the aggregation to pick some result to return.

The example given in the paper (which we'll implement soon) uses an algebraic lens to do classification of flower measurements into particular species. It uses the "projection" function from the characterization (e.g. s -> a) to select the measurements from a Flower, and the "selection" function (f s -> b -> t) to take a list of Flowers, and a reference set of measurements, to classify those measurements into a species, returning a flower with the selected measurements and species.

We'll learn more about that as we implement it!

First guesses at an implementation

In the abstract we're given the prose for what the provided examples are intended to do, unfortunately we're only given a few very small code snippets without any source code or even type-signatures to help us out, so I'll mostly be guessing my way through this. As far as I can tell the paper is more concerned with proving the math first, since an implementation must exist if the math works out right? Let's see if we can take on the role of applied mathematician and get some code we can actually run 😃. I'll need to take a few creative liberties to get everything wired together.

Here are the examples given in the abstract:

-- Assume 'iris' is a data-set (e.g. list) of flower objects
>>> (iris !! 1) ^. measurements
(4.9 , 3.0 , 1.4 , 0.2)

>>> iris ?. measurements ( Measurements 4.8 3.1 1.5 0.1)
Iris Setosa (4.8 , 3.1 , 1.5 , 0.1)

>>> iris >- measurements . aggregateWith mean
Iris Versicolor (5.8, 3.0, 3.7, 1.1)

We're not provided with the implementation of ?., >-, Measurements, measurements, OR aggregateWith, nor do we have the data-set that builds up iris... Looks like we've got our work cut out for us here 😓

To start I'll make some assumptions to build up a dummy data-set of flowers to experiment with:

-- Some flower species
data Species = Setosa | Versicolor | Virginica
  deriving Show

-- Our measurements will just be a list of floats for now
data Measurements = Measurements {getMeasurements :: [Float]}
  deriving Show

-- A flower consists of a species and some measurements
data Flower = Flower { flowerSpecies :: Species
                     , flowerMeasurements :: Measurements}
  deriving Show

versicolor :: Flower
versicolor = Flower Versicolor (Measurements [2, 3, 4, 2])

setosa :: Flower
setosa = Flower Setosa (Measurements [5, 4, 3, 2.5])

flowers :: [Flower]
flowers = [versicolor, setosa]

That gives us something to fool around with at least, even if it's not exactly like the data-set used in the paper.

Now for the fun part, we need to figure out how we can somehow cram a classification algorithm into an optic! They loosely describe measurements as a list-lens which "encapsulates some learning algorithm which classifies measurements into a species", but the concrete programmatic definition of that will be up to my best judgement I suppose.

I'll be implementing these as Profunctor optics; they tend to work out a bit cleaner than the Van Laarhoven approach, especially when working with "Grate-like" optics, which is where an algebraic lens belongs. The sheer amount of guessing and filling in blanks I had to do means I stared at this for a good long while before I figured out a way to make this work. One of the tough parts is that the examples show the optic working for a single flower (like the (iris !! 1) ^. measurements example), but it somehow also runs a classifier over a list of flowers as in the iris ?. measurements ( Measurements 4.8 3.1 1.5 0.1) example. We need to find the minimal profunctor constraints which allow us to lift the characterization into an actual runnable optic!

I've been on a bit of a Corepresentable kick lately and it seemed like a good enough place to start. It also has the benefit of being easily translated into Van-Laarhoven optics if needed.

Here was my first crack at it:

import Data.Profunctor
import Data.Profunctor.Sieve
import Data.Profunctor.Rep
import Data.Foldable

type Optic p s t a b = p a b -> p s t

listLens :: forall p f s t a b
         . (Corepresentable p, Corep p ~ f, Foldable f)
         => (s -> a)
         -> ([s] -> b -> t)
         -> Optic p s t a b
listLens project flatten p = cotabulate run
  where
    run :: f s -> t
    run fs = flatten (toList fs) (cosieve p . fmap project $ fs)

This is a LOT to take in, let's address it in pieces.

First things first, a profunctor optic is simply a morphism over a profunctor, something like: p a b -> p s t.

Next, the Corepresentable constraint:

Corepresentable has Cosieve as a superclass, and so provides us with both of the following methods:

Cosieve p f       => cosieve    :: p a b -> f a -> b
Corepresentable p => cotabulate :: (Corep p d -> c) -> p d c

These two functions together allow us to round-trip our profunctor from p a b into some f a -> b and then back! In fact, this is the essence of what Corepresentable means, we can "represent" the profunctor as a function from a value in some context f to the result.
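For Costar in particular, the round trip is nothing more than the newtype wrapper going off and on. Here is a small illustration of my own, with a stripped-down stand-in for Costar defined inline so the sketch needs only base (the real one lives in Data.Profunctor):

```haskell
-- A stripped-down stand-in for Data.Profunctor's Costar.
newtype Costar f a b = Costar { runCostar :: f a -> b }

-- cosieve for Costar just unwraps the function...
cosieveCostar :: Costar f a b -> f a -> b
cosieveCostar = runCostar

-- ...and cotabulate wraps one back up.
cotabulateCostar :: (f a -> b) -> Costar f a b
cotabulateCostar = Costar

-- Round-tripping a function through the profunctor changes nothing:
-- cosieveCostar (cotabulateCostar sum) [1,2,3] evaluates to 6.
```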

Profunctors in general can't simply be applied like functions can, these two functions allow us to reflect an opaque and mysterious generic profunctor into a real function that we can actually run! In our implementation we fmap project over the f s's to get f a, then run that through the provided continuation: f a -> b which we obtain by running cosieve on the profunctor argument, then we can flatten the whole thing using the user-provided classification-style function.

Don't worry if this doesn't make a ton of sense on its own, it took me a while to figure out. At the end of the day, we have a helper which allows us to write a list-lens which composes with any Corepresentable profunctor. This allows us to write our measurements classifier, but we'll need a few helper functions first.

First we'll write a helper to compute the Euclidean distance between two flowers' measurements (i.e. we square the difference between each pair of corresponding measurements, sum the squares, and take the square root):

measurementDistance :: Measurements -> Measurements -> Float
measurementDistance (Measurements xs) (Measurements ys) =
    sqrt . sum $ zipWith diff xs ys
  where
    diff a b = (a - b) ** 2

This will tell us how similar two measurements are, the lower the result, the more similar they are.
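For a quick worked example (my own, with the definitions repeated so the snippet stands alone): the measurement vectors [1, 2] and [4, 6] differ by 3 and 4, the classic 3-4-5 right triangle, so the distance comes out to exactly 5.0.

```haskell
data Measurements = Measurements { getMeasurements :: [Float] }
  deriving Show

measurementDistance :: Measurements -> Measurements -> Float
measurementDistance (Measurements xs) (Measurements ys) =
    sqrt . sum $ zipWith diff xs ys
  where
    diff a b = (a - b) ** 2

-- sqrt ((1-4)^2 + (2-6)^2) = sqrt 25 = 5.0
example :: Float
example = measurementDistance (Measurements [1, 2]) (Measurements [4, 6])
```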

Next we'll write a function which when given a reference set of flowers will detect the flower which is most similar to a given set of measurements. It will then build a flower by combining the closest species and the given measurements.

classify :: [Flower] -> Measurements -> Maybe Flower
classify flowers m
  | null flowers = Nothing
  | otherwise =
      let Flower species _ =
            minimumBy
              (comparing (measurementDistance m . flowerMeasurements))
              flowers
      in  Just $ Flower species m

This function returns its result in Maybe, since we can't classify anything if we're given an empty data-set.

Now we have our pieces, we can build the measurements list-lens!

measurements :: (Corepresentable p, Corep p ~ f, Foldable f) 
             => Optic p Flower (Maybe Flower) Measurements Measurements
measurements = listLens flowerMeasurements classify

We specify that the container type used in the Corepresentable instance must be foldable so that we can convert it into a list to do our classification.

Okay! Now we have enough to try some things out! The first example given in the abstract is:

>>> (iris !! 1) ^. measurements

Which we'll translate into:

>>> (flowers !! 1) ^. measurements

But unfortunately we get an error:

• No instance for (Corepresentable
                      (Data.Profunctor.Types.Forget Measurements))
    arising from a use of ‘measurements’

By the way, all the examples in this post are implemented using my highly experimental Haskell profunctor optics implementation proton. Feel free to play with it, but don't use it in anything important.

Hrmm, looks like (^.) uses Forget for its profunctor and it doesn't have a Corepresentable instance! We'll come back to that soon, let's see if we can get anything else working first.

The next example is:

iris ?. measurements (Measurements 4.8 3.1 1.5 0.1)

I'll admit I don't understand how this example could possibly work, optics necessarily have the type p a b -> p s t, so how are they passing a Measurements object directly into the optic? Perhaps it has some other signature, but we know that's not true from the previous example which uses it directly as a lens! Hrmm, I strongly suspect that this is a typo, mistake, or most likely this example is actually just short-hand pseudocode of what an implementation might look like and we're discovering a few rough edges. Perhaps the writers of the paper thought of something sneaky that I missed. Without the source code for the example we'll never know, but since I can't see how this version could work, let's modify it into something close which I can figure out.

It appears as though (?.) is an action which runs the optic. Actions in profunctor optics tend to specialize the optic to a specific profunctor, then pass the other arguments through it using that profunctor as a carrier. We know we need a profunctor that's Corepresentable, and the simplest instance for that is definitely Costar! Here's what it looks like:

newtype Costar f a b = Costar (f a -> b)

Costar is basically the "free" Corepresentable, it's just a new-type wrapper around a function from values in a container to a result. You might also know it by the name Cokleisli, they're the same type, but Costar is the one we typically use with Profunctors.

If we swap the arguments in the example around a bit, we can write an action which runs the optic using Costar like this:

(?.) :: (Foldable f) => f s -> Optic (Costar f) s t a b -> b -> t
(?.) xs f a = (runCostar $ f (Costar (const a))) xs

The example seems to use a static value for the comparison, so I use const to embed that value into the Costar profunctor, then run that through the provided profunctor morphism (i.e. optic).

This lets us write the example like this instead:

>>> flowers ?. measurements $ Measurements [5, 4, 3, 1]
Just (Flower Setosa (Measurements [5.0,4.0,3.0,1.0]))

Which is really close to the original, we just added a $ to make it work.

>>> iris ?. measurements (Measurements 4.8 3.1 1.5 0.1)

Let's see if this is actually working properly. We're passing a "fixed" measurement in as our aggregation function, meaning we're comparing every flower in our list to these specific measurements and will find the flower that's "closest". We then build a flower using the species closest to those measurements alongside the provided measurements. To test that this is actually working properly, let's try again with measurements that match our versicolor flower more closely:

>>> setosa
Flower Setosa (Measurements [5.0,4.0,3.0,2.5])
>>> versicolor
Flower Versicolor (Measurements [2.0,3.0,4.0,2.0])

-- By choosing measurements close to the `versicolor` in our data-set
-- we expect the measurements to be classified as Versicolor
>>> flowers ?. measurements $ Measurements [1.9, 3.2, 3.8, 2]
Just (Flower Versicolor (Measurements [1.9,3.2,3.8,2.0]))

We can see that indeed it now switches the classification to Versicolor! It appears to be working!

Even though this version looks a lot like the example in the abstract, it doesn't quite feel in line with the style of existing optics libraries, so I'll flip the arguments around a bit further: (I'll rename the combinator to ?- to avoid confusion with the original)

(?-) :: (Foldable f) => Optic (Costar f) s t a b -> b -> f s -> t
(?-) f a xs = (runCostar $ f (Costar (const a))) xs

The behaviour is the same, but flipping the arguments allows it to fit the "feel" of other optics combinators better (IMHO). We use it like this:

>>> flowers & measurements ?- Measurements [5, 4, 3, 1]
Just (Flower Setosa (Measurements [5.0,4.0,3.0,1.0]))

We pass in the data-set, and "assign" our comparison value to be the single Measurement we're considering.

Making measurements a proper lens

Before moving on any further, let's see if we can fix up measurements so we can use (^.) on a single flower like the first example does. Remember, (^.) uses Forget as the concrete profunctor instead of Costar, so whatever we do, it has to have a valid instance for the Forget profunctor which looks like this:

newtype Forget r a b = Forget (a -> r)

As an exercise for the reader, try to implement Corepresentable for Forget (or even Cosieve) and you'll see it's not possible, so we'll need to find a new tactic. Perhaps there's some other weaker abstraction we can invent which works for our purposes.

The end-goal here is to create an optic out of the characterization of an algebraic lens, so what if we just encode that exact idea into a typeclass? It's so simple it just might work! Probably should have started here, sticking with the optics metaphor: hindsight is 20/20.

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}

class Profunctor p => Algebraic f p | p -> f where
  algebraic :: (s -> a) -> (f s -> b -> t) -> p a b -> p s t

type AlgebraicLens f s t a b = forall p. Algebraic f p => p a b -> p s t
type AlgebraicLens' f s a = AlgebraicLens f s s a a

By keeping f general we can write list-lenses or any other type of algebraic lens. I added a functional dependency here to help with type-inference. This class represents exactly what we want an algebraic lens to do. It's entirely possible there's a more general profunctor class which has equivalent power, if I'm missing one please let me know!

Now that we have a typeclass we'll implement an instance for Costar so we can still use our (?.) and (?-) actions:

instance Functor f => Algebraic f (Costar f) where
  algebraic project flatten p = cotabulate run
    where
      run fs = flatten fs (cosieve (lmap project p) fs)

Technically this implementation works on any Corepresentable profunctor, not just Costar, so we could re-use this for a few other profunctors too!

Did we make any progress? We need to see if we can implement an instance of Algebraic for Forget, if we can manage that, then we can use view over our measurements optic just like the example does.

instance Algebraic Proxy (Forget r) where
  algebraic project _flatten (Forget f) = Forget (f . project)

Well that was pretty painless! This allows us to do what our Corepresentable requirement didn't.

I've arbitrarily chosen Proxy as the carrier type because it's empty and doesn't contain any values. The carrier itself isn't ever used, but I needed to pick something and this seemed like as good a choice as any. Perhaps a higher-rank void type would be more appropriate, but we'll cross that bridge when we have to.

With that, we just need to re-implement our measurements optic using Algebraic:

measurements :: Foldable f 
             => AlgebraicLens f Flower (Maybe Flower) Measurements Measurements
measurements = algebraic flowerMeasurements classify

The name measurements is a bit of a misnomer, it does classification and selection, which is quite a bit more than just selecting the measurements! Perhaps a better name would be measurementsClassifier or something. I'll stick to the name used in the abstract for now.

Now we can view through our measurements optic directly! This fulfills the first example perfectly!

>>> (flowers !! 1) ^. measurements
Measurements [5.0,4.0,3.0,2.5]

Awesome! All that's left to have a proper lens is to be able to set as well. In profunctor optics, the set and modify actions simply use the (->) profunctor, so we'll need an instance for that. Technically (->) is isomorphic to Costar Identity, so we could use the exact same implementation we used for our Costar instance but there's a simpler implementation if we specialize. It turns out that Identity makes a good carrier type since it holds exactly one argument.
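That isomorphism between (->) and Costar Identity can be spelled out with only base (a small illustration of my own): it is nothing more than wrapping and unwrapping Identity.

```haskell
import Data.Functor.Identity

-- A plain function becomes a function out of Identity...
toCostarIdentity :: (a -> b) -> (Identity a -> b)
toCostarIdentity f = f . runIdentity

-- ...and back again; the two directions compose to the identity.
fromCostarIdentity :: (Identity a -> b) -> (a -> b)
fromCostarIdentity g = g . Identity
```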

instance Algebraic Identity (->) where
  algebraic project flatten p = run
    where
      run s = flatten (Identity s) (p . project $ s)

Now we can modify or set measurements through our algebraic lens too:

>>> versicolor & measurements .~ Measurements [9, 8, 7, 6]
Flower Versicolor Measurements [9.0,8.0,7.0,6.0]

Since we can get and set, our algebraic lens is indeed a full-blown lens! This is surprisingly interesting since we didn't make any use of Strong, which is how most lenses are implemented; in fact, Costar isn't even a Strong profunctor!

You might be curious how this actually works at all, behind the scenes the algebraic lens receives the new measurements as though it were the result of an aggregation, then uses those measurements with the Species of the single input flower (which of course hasn't changed), thus appearing to modify the flower's measurements! It's the "long way round" but it behaves exactly the same as a simpler lens would.

Here's one last interesting instance just for fun:

instance Algebraic Proxy Tagged where
  algebraic project flatten (Tagged b) = Tagged (flatten Proxy b)

Tagged is used for the review actions, which means we can try running our algebraic lens as a review:

>>> review measurements (Measurements [1, 2, 3, 4])
Nothing

I suppose that's what we can expect: we're effectively classifying measurements without any data-set, so our classify function 'fails' with its Nothing value. It's very cool to know that we can (in general) run algebraic lenses in reverse like this!

Running custom aggregations

We have one more example left to look at:

>>> iris >- measurements . aggregateWith mean
Iris Versicolor (5.8 , 3.0 , 3.7 , 1.1)

In this example they compute the mean of each of the respective measurements across their whole data-set, then find the species of flower which best represents the "average flower" of the data-set.

In order to implement this we'd need to implement aggregateWith, which is a Kaleidoscope; that's a whole different type of optic, so we'll continue that thread in a subsequent post. But we can get most of the way there with what we've got already if we write a slightly smarter aggregation function.

To spoil kaleidoscopes just a little, aggregateWith allows running aggregations over lists of associated measurements. That is to say that it groups up each set of related measurements across all of the flowers, then takes the mean of each set of measurements (i.e. the mean all the first measurements, the mean of all the second measurements, etc.). If we don't mind the inconvenience, we can implement this exact same example by baking that logic into an aggregation function and thus avoid the need for a Kaleidoscope until the next blog post 😉

Right now our measurements optic focuses the Measurements of a set of flowers, but the only action we have ignores the data-set entirely and accepts a specific measurement as input. We can easily modify it to take a custom aggregation function:

infixr 4 >-
(>-) :: Optic (Costar f) s t a b -> (f a -> b) -> f s -> t
(>-) opt aggregator xs = (runCostar $ opt (Costar aggregator)) xs

My version of the combinator rearranges the arguments (again) to make it read a bit more like %~ and friends. It takes an algebraic lens on the left and an aggregation function on the right; it'll run the custom aggregation and hand the result off to the algebraic lens.
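For intuition, here's the same combinator with the profunctor plumbing inlined as a plain function (a simplification for illustration, not the post's actual definition):

```haskell
-- Project each element, run the aggregation over the projections,
-- then flatten the aggregate back against the whole container.
aggThen :: Functor f
        => (s -> a)         -- project
        -> (f s -> b -> t)  -- flatten
        -> (f a -> b)       -- custom aggregation
        -> f s -> t
aggThen project flatten aggregator xs =
  flatten xs (aggregator (fmap project xs))

main :: IO ()
main = print (aggThen id keepAtLeast mean [1, 2, 3, 4 :: Double])
  where
    mean ys = sum ys / fromIntegral (length ys)
    keepAtLeast ys b = filter (>= b) ys
-- [3.0,4.0]
```

Here the "aggregation" is the mean (2.5), and the "flatten" keeps the elements at or above it.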

This lets us write the above example like this:

>>> flowers & measurements >- avgMeasurement

But we'll need to define the avgMeasurement function first. It needs to take a Foldable container filled with measurements and compute the average value for each of the four measurements. If we're clever about it, transpose can regroup all the measurements exactly how we want!

import Data.Foldable (toList)
import Data.List (transpose)

mean :: Fractional a => [a] -> a
mean [] = 0
mean xs = sum xs / fromIntegral (length xs)

avgMeasurement :: Foldable f => f Measurements -> Measurements
avgMeasurement ms = Measurements (mean <$> groupedMeasurements)
  where
    groupedMeasurements :: [[Float]]
    groupedMeasurements = transpose (getMeasurements <$> toList ms)

We manually pair all the associated elements, then construct a new set of measurements where each value is the average of that measurement across all the inputs.
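As a self-contained check of the grouping logic (with a stand-in Measurements newtype; the two sample measurement vectors are assumptions chosen to match the result shown below):

```haskell
import Data.List (transpose)

newtype Measurements = Measurements { getMeasurements :: [Float] }
  deriving (Show, Eq)

mean :: Fractional a => [a] -> a
mean [] = 0
mean xs = sum xs / fromIntegral (length xs)

-- transpose pairs up the first measurements, the second measurements, ...
avgMeasurement :: [Measurements] -> Measurements
avgMeasurement ms =
  Measurements (mean <$> transpose (getMeasurements <$> ms))

main :: IO ()
main = print (avgMeasurement [ Measurements [5, 4, 3, 2.5]
                             , Measurements [2, 3, 4, 2] ])
```

This prints Measurements {getMeasurements = [3.5,3.5,3.5,2.25]}, averaging column-wise rather than row-wise.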

Now we can finally find out what species the average flower is closest to!

>>> flowers & measurements >- avgMeasurement
Just (Flower Versicolor (Measurements [3.5,3.5,3.5,2.25]))

Looks like it's closest to the Versicolor species!

We can substitute avgMeasurement for any sort of aggregation function of type [Measurements] -> Measurements and this expression will run it on our data-set and return the species which is closest to those measurements. Pretty cool stuff!

Custom container types

We've stuck with a list so far since it's easy to think about, but algebraic lenses work over any container type so long as you can implement the aggregation functions you want on them. In this case we only require Foldable for our classifier, so we can hot-swap our list for a Map without any changes!

>>> M.fromList [(1.2, setosa), (0.6, versicolor)] 
      & measurements >- avgMeasurement
Just (Flower Versicolor (Measurements [3.5,3.5,3.5,2.25]))

This gives us the same answer of course, since the Foldable instance simply ignores the keys, but the container type is carried through any composition of algebraic lenses! That means our aggregation function now has the type Map Float Measurements -> Measurements; notice how it still projects from Flower into Measurements even inside the map. Let's say we want to run a scaling factor over each of our measurements as part of aggregating them; we can bake it into the aggregation like this:

scaleBy :: Float -> Measurements -> Measurements
scaleBy w (Measurements m) = Measurements (fmap (*w) m)

>>> M.fromList [(1.2, setosa), (0.6, versicolor)] 
      & measurements >- avgMeasurement . fmap (uncurry scaleBy) . M.toList
Just (Flower Versicolor (Measurements [3.5,3.5,3.5,2.25]))

Running the aggregation with these scaling factors changed our result and shows us what the average flower would be if we scaled each flower by the amount provided in the input map.

This isn't a perfect example of what other containers could be used for, but I'm sure folks will be dreaming up clever ideas in no time!

Other aggregation types

Just as we can customize the container type and the aggregation function we pass in, we can also build algebraic lenses from any manner of custom "classification" we want to perform. Let's write a new list-lens which partitions the input values based on the result of the aggregation, in essence classifying each point in our data-set as above or below the aggregation's result.

partitioned :: forall f a. (Ord a, Foldable f) => AlgebraicLens f a ([a], [a]) a a
partitioned = algebraic id splitter
  where
    splitter :: f a -> a -> ([a], [a])
    splitter xs ref
      = (filter (< ref) (toList xs), filter (>= ref) (toList xs))

It's completely fine for our s and t to be disparate types like this.

This allows us to split a container of values into those which are less than the aggregation result and those which are greater than or equal to it. We can use it with a static value like this:

>>> [1..10] & partitioned ?- 5
([1,2,3,4],[5,6,7,8,9,10])

Or we can provide our own aggregation function. Let's say we want to split the data-set into the values which are less than its mean and those which are greater; we'll use our modified version of >- for this:

>>> mean [3, -2, 4, 1, 1.3]
1.46

>>> [3, -2, 4, 1, 1.3] & partitioned >- mean
([-2.0,1.0,1.3], [3.0,4.0])
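A quick sanity check of the arithmetic with the optics machinery stripped away (the splitter inlined as a plain function):

```haskell
mean :: Fractional a => [a] -> a
mean [] = 0
mean xs = sum xs / fromIntegral (length xs)

-- the same splitter `partitioned` uses, as a plain function
splitBelow :: Ord a => [a] -> a -> ([a], [a])
splitBelow xs ref = (filter (< ref) xs, filter (>= ref) xs)

main :: IO ()
main = do
  let xs = [3, -2, 4, 1, 1.3] :: [Double]
  -- mean xs is 1.46; everything below it goes left, the rest right
  print (splitBelow xs (mean xs))
-- ([-2.0,1.0,1.3],[3.0,4.0])
```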

Here's a list-lens which generalizes the idea behind minimumBy, maximumBy, etc. into an optic. We allow the user to provide a selection function for indicating the element they want, then the optic itself will pluck the appropriate element out of the collection.

-- Run an aggregation on the first elements of the tuples
-- Select the second tuple element which is paired with the value
-- equal to the aggregation result.
onFirst :: (Foldable f, Eq a) => AlgebraicLens f (a, b) (Maybe b) a a
onFirst = algebraic fst picker
  where
    picker xs a = lookup a $ toList xs

-- Get the character paired with the smallest number
>>> [(3, 'a'), (10, 'b'), (2, 'c')] & onFirst >- minimum
Just 'c'

-- Get the character paired with the largest number
>>> [(3, 'a'), (10, 'b'), (2, 'c')] & onFirst >- maximum
Just 'b'

-- Get the character paired with the first even number
>>> [(3, 'a'), (10, 'b'), (2, 'c')] & onFirst >- head . filter even
Just 'b'
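Inlined without the optics, the pattern is just "aggregate the projections, then look the winner back up" (a plain-function simplification of onFirst >- agg, not the library's API):

```haskell
-- Aggregate the first components, then return the second component
-- paired with the aggregate (if any pair matches).
onFirstWith :: Eq a => ([a] -> a) -> [(a, b)] -> Maybe b
onFirstWith agg xs = lookup (agg (map fst xs)) xs

main :: IO ()
main = do
  print (onFirstWith minimum [(3, 'a'), (10, 'b'), (2, 'c')])  -- Just 'c'
  print (onFirstWith maximum [(3, 'a'), (10, 'b'), (2, 'c')])  -- Just 'b'
  print (onFirstWith (head . filter even)
                     [(3, 'a'), (10, 'b'), (2, 'c')])          -- Just 'b'
```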

If our structure is indexable, we can do this much more generally and build a library of composable optics which dig deeply into structures and perform selection aggregations over anything we want. It may take a little work to figure out the cleanest set of combinators, but here's a simplified example of just how easy it is to start messing around with:

-- Pick some substate or projection from each value,
-- The aggregation selects the index of one of these projections and returns it
-- Return the 'original state' that lives at the chosen index
selectingOn :: (s -> a) -> AlgebraicLens [] s (Maybe s) a (Maybe Int)
selectingOn project = algebraic project picker
  where
    picker xs i = (xs !!) <$> i

-- Use the `Eq` class and return the index of the aggregation result in the original list
indexOf :: Eq s => AlgebraicLens [] s (Maybe Int) s s
indexOf = algebraic id (flip elemIndex)

-- Project each string into its length, 
-- then select the index of the string with length 11,
-- Then find and return the element at that index
>>> ["banana", "pomegranate", "watermelon"] 
      & selectingOn length . indexOf ?- 11
Just "pomegranate"

-- We can can still use a custom aggregation function,
-- This gets the string of the shortest length. 
-- Note we didn't need to change our chain of optics at all!
>>> ["banana", "pomegranate", "watermelon"] 
      & selectingOn length . indexOf >- minimum
Just "banana"
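Again, the composed optic boils down to a small plain-function pipeline (a hedged simplification of selectingOn proj . indexOf, for intuition only):

```haskell
import Data.List (elemIndex)

-- Project each element, let the aggregation pick one of the projections,
-- locate its index, and return the original element at that index.
selectOn :: Eq a => (s -> a) -> ([a] -> a) -> [s] -> Maybe s
selectOn proj agg xs = (xs !!) <$> elemIndex (agg ps) ps
  where ps = map proj xs

main :: IO ()
main = do
  print (selectOn length (const 11) ["banana", "pomegranate", "watermelon"])
  -- Just "pomegranate"
  print (selectOn length minimum ["banana", "pomegranate", "watermelon"])
  -- Just "banana"
```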

I'm sure you can already imagine all sorts of different applications for this sort of thing. It may seem more awkward than the straightforward Haskell way of doing these things, but it's a brand new idea; it'll take time for the ecosystem to grow around it and for us to figure out the "best way".

Summarizing Algebraic Lenses

The examples we've looked at here are just a few of the many possible ways we can use algebraic lenses! Remember that we can generalize the f container into almost anything: we can use Maps, lists, we could even use a function as the container! In addition, we can use any sort of function in place of the classifier; there's no requirement that it return the same type as its input. Algebraic lenses allow us to compose lenses which focus on a specific portion of state, run a comparison or aggregation there (e.g. get the maximum or minimum element from the collection based on some property), then zoom back out and select the larger element which contains the minimum/maximum substate!

This means we can embed operations like minimumBy, find, elemIndex and friends as composable optics! There are many other interesting aggregations to be found in statistics, linear algebra, and normal day-to-day tasks. I'm very excited to see where this ends up going; there are a ton of possibilities which I haven't begun to think about yet.

Algebraic lenses also tend to compose better with Grate-like optics than traditional Strong-profunctor-based lenses do. They work well with getters and folds, and can be used with setters or traversals for setting or traversing (but not aggregating). They play a role in the ecosystem and are just one puzzle piece in the world of optics we're still discovering.

Thanks for reading! We'll dig into Kaleidoscopes soon, so stay tuned!

Updates & Edits

After releasing this post, some authors of the paper reached out with helpful notes (thanks Bryce and Mario!).

It turns out that we can generalize the Algebraic class further while maintaining its strength.

The suggested model for this is to specify profunctors which are Strong with respect to Monoids. To understand the meaning of this, let's take a look at the original Strong typeclass:

class Profunctor p => Strong p where
  first' :: p a b -> p (a, c) (b, c)
  second' :: p a b -> p (c, a) (c, b)

The idea is that a Strong profunctor can allow additional values to be passed through freely. We can restrict this idea slightly by requiring the value which we're passing through to be a Monoid:

class Profunctor p => MStrong p where
  mfirst' ::  Monoid m => p a b -> p (a, m) (b, m)
  mfirst' = dimap swap swap . msecond'
  msecond' ::  Monoid m => p a b -> p (m, a) (m, b)
  msecond' = dimap swap swap . mfirst'

  {-# MINIMAL mfirst' | msecond' #-}

This gives us more power when writing instances: we can "summon" an m from nowhere via mempty if needed, and can also combine multiple m's together via mappend. Let's write all the needed instances of our new class:

instance MStrong (Forget r) where
  msecond' = second'

instance MStrong (->) where
  msecond' = second'

instance MStrong Tagged where
  msecond' (Tagged b) = Tagged (mempty, b)

instance Traversable f => MStrong (Costar f) where
  msecond' (Costar f) = Costar (go f)
    where
      go f fma = f <$> sequenceA fma

The first two instances simply rely on Strong; all Strong profunctors are trivially MStrong in this manner. To put it differently, MStrong is a superclass of Strong (although this isn't reflected in the libraries at the moment). I won't bother writing out all the other trivial instances; just know that every Strong profunctor has one.

Tagged and Costar are NOT Strong profunctors, but by taking advantage of the Monoid we can come up with suitable instances here! We use mempty to pull a value from thin air for Tagged, and Costar uses the Applicative instance of ((,) m) (given Monoid m) to sequence its input into the right shape.
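That last point is easy to see in isolation: with a Monoid first component, pairs are Applicative, so sequencing mappends the monoidal parts while collecting the payloads.

```haskell
-- sequenceA :: [(String, Int)] -> (String, [Int]) here, using the
-- Applicative instance of ((,) String): the strings are mappended,
-- the numbers are collected.
main :: IO ()
main = print (sequenceA [("foo", 1), ("bar", 2 :: Int)])
-- ("foobar",[1,2])
```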

Indeed, this appears to be a more general construction, but at first glance it seems to be orthogonal; how can we regain our algebraic function using only the MStrong constraint?

import Control.Arrow ((&&&))

algebraic :: forall m p s t a b
           . (Monoid m,  MStrong p) 
           => (s -> m) 
           -> (s -> a) 
           -> (m -> b -> t) 
           -> Optic p s t a b
algebraic inject project flatten p
  = dimap (inject &&& id) (uncurry flatten) strengthened
  where
    strengthened :: p (m, s) (m, b)
    strengthened = msecond' (lmap project p)

This is perhaps not the most elegant definition, but it matches the type without doing anything outright stupid, so I suppose it will do (type-hole driven development FTW)!

We require from the user a function which injects the state into a Monoid, then use MStrong to project that monoid through the profunctor's action. On the other side we use the result of the computation alongside the Monoidal summary of the input value(s) to compute the final aggregation.
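Specialised to plain functions, the whole construction collapses to something very small (a sketch with the profunctor plumbing inlined; note that (->) only ever sees a single s, so no mappending actually happens here — the accumulation matters for container-carrying profunctors like Costar):

```haskell
import Data.Monoid (Sum (..))

-- The generalised `algebraic`, specialised to (->).
malg :: Monoid m
     => (s -> m)       -- inject into a monoidal summary
     -> (s -> a)       -- project
     -> (m -> b -> t)  -- flatten
     -> (a -> b) -> s -> t
malg inject project flatten p s = flatten (inject s) (p (project s))

main :: IO ()
main = print (malg (Sum . snd) fst (\m b -> (b, getSum m)) (* 2) (10, 5 :: Int))
-- (20,5): the first component was projected and doubled,
-- the second was carried through the monoidal summary.
```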

We can recover our standard list-lens operations by simply choosing [s] to be our Monoid.

listLens :: MStrong p => (s -> a) -> ([s] -> b -> t) -> Optic p s t a b
listLens = algebraic pure

In fact, we can easily generalize over any Alternative container. Alternatives provide a Monoid over their Applicative structure, and we can use the Alt newtype wrapper from Data.Monoid to treat an Alternative structure as a Monoid.

altLens :: (Alternative f, MStrong p)
        => (s -> a) -> (f s -> b -> t) -> Optic p s t a b
altLens project flatten = algebraic (Alt . pure) project (flatten . getAlt)

So now we've got a fully general algebraic lens which allows aggregating over any monoidal projection of input, including helpers for doing this over Alternative structures, or lists in particular! This gives us a significant amount of flexibility and power.

I won't waste everyone's time by testing these new operations here; take heart that they do indeed work the same as the original definitions provided above.

Hopefully you learned something 🤞! Did you know I'm currently writing a book? It's all about lenses and optics! It takes you all the way from beginner to optics wizard, and it's currently in early access! Consider supporting it, and more posts like this one, by pledging on my Patreon page! It takes quite a bit of work to put these things together; if I managed to teach you something, or even just entertain you for a minute or two, maybe send a few bucks my way for a coffee? Cheers!


December 18, 2019 12:00 AM

December 16, 2019

FP Complete

Casa and Stack

This post is aimed at Haskellers who are roughly aware of how build infrastructure works for Haskell.

But the topic may have a general audience outside of the Haskell community, so this post will briefly describe each part of the infrastructure from the bottom up: from compiling modules, to building and configuring packages, to downloading and storing those packages online.

This post is a semi-continuation from last week's post on Casa.

GHC

GHC is the de facto standard Haskell compiler. It knows how to load packages and compile files, and produce binary libraries and executables. It has a small database of installed packages, with a simple command-line interface for registering and querying them:

$ ghc-pkg register yourpackage
$ ghc-pkg list

Apart from that, it doesn't know anything else about how to build packages or where to get them.

Cabal

Cabal is the library which builds Haskell packages from a .cabal package description, consisting of a name, version, package dependencies and build flags. To build a Haskell package, you create a file (typically Setup.hs) with contents roughly like:

import Distribution.Simple -- from the Cabal library
main = defaultMain

This (referred to as a "Simple" build) creates a program that you can run to configure, build and install your package.

$ ghc Setup.hs
$ ./Setup configure # Checks dependencies via ghc-pkg
$ ./Setup build # Compiles the modules with GHC
$ ./Setup install # Runs the register step via ghc-pkg

This file tends to be included in the source repository of your package, and modern package build tools tend to create it automatically if it doesn't already exist. The reason the build system works like this is so that you can have custom build setups: you can add pre/post build hooks and things like that.

But the Cabal library doesn't download packages or manage projects consisting of multiple packages, etc.

Hackage

Hackage is an online archive of versioned package tarballs. Anyone can upload packages to this archive; each package must have a version associated with it, so that you can later download a specific instance of the package that you want, e.g. text-1.2.4.0. Each package is restricted to a set of maintainers (such as the author) who are able to upload to it.

The Hackage admins and package authors are able to revise the .cabal package description without publishing a new version, and regularly do. These new revisions supersede previous revisions of the cabal files, while the original revisions remain available if specifically requested (where supported by the tooling being used).

cabal-install

There is a program called cabal-install which is able to download packages from Hackage automatically and does some constraint solving to produce a build plan: the set of versions of package dependencies that the tool picks for building your package.

It might look like:

  • base-4.12.0.0
  • bytestring-0.10.10.0
  • your-package-0.0

Version bounds (e.g. < 2.1 and > 1.3) are used by cabal-install as heuristics for the solving. It isn't actually known whether any of these packages build together, or whether the build plan will succeed. It's a best guess.

Finally, once it has a build plan, it uses both GHC and the Cabal library to build Haskell packages, by creating the aforementioned Setup.hs automatically if it doesn't already exist, and running the ./Setup configure, build, etc. step.

Stackage

As mentioned, the build plans produced by cabal-install are a best guess based on constraint solving of version bounds. There is a matrix of possible build plans, and the particular one you get may be entirely novel, one that no one has ever tried before. Some call this "version hell".

To rectify this situation, Stackage is a "stable Hackage" service which publishes subsets of Hackage that are known to build and pass tests together, called snapshots. There are nightly snapshots published, and long-term snapshots called lts-1.0, lts-2.2, etc., which tend to steadily roll along with the GHC release cycle. These LTS releases are intended to be what people put in source control for their projects.

The Stackage initiative has been running since it was announced in 2012.

stack

The stack program was created to specifically make reproducible build plans based on Stackage. Authors include a stack.yaml file in their project root, which looks like this:

snapshot: lts-1.2
packages: [mypackage1, mypackage2]

This tells stack that:

  1. We want to use the lts-1.2 snapshot; any package dependencies that we need for this project will therefore come from there.
  2. Within this directory, there are two package directories that we want to build.

The snapshot also indicates which version of GHC is used to build that snapshot; so stack also automatically downloads, installs and manages the GHC version for the user. GHC releases tend to come out every 6 months to one year, depending on scheduling, so it's common to have several GHC versions installed on your machine at once. This is handled transparently out of the box with stack.

Additionally, we can add extra dependencies for when we have patched versions of upstream libraries, which happens a lot in the fast-moving world of Haskell:

snapshot: lts-1.2
packages: [mypackage1, mypackage2]
extra-deps: ["bifunctors-5.5.4"]

The build plan for Stack is easy: the snapshot is already a build plan. We just need to add our source packages and extra dependencies on top of the pristine build plan.

Finally, once it has a build plan, it uses both GHC and the Cabal library to build Haskell packages, by creating the aforementioned Setup.hs automatically if it doesn't already exist, and running the ./Setup configure, build, etc. step.

Pantry

Since new revisions of cabal files can be made available at any time, a package identifier like bifunctors-5.5.4 is not reproducible. Its meaning can change over time as new revisions become available. In order to get reproducible build plans, we have to track "revisions" such as bifunctors-5.5.4@rev:1.

Stack has a library called Pantry to store all of this package metadata in an SQLite database on the developer's machine. It does so in a content-addressable way (CAS: content-addressable storage), so that every variation on version and revision of a package has a unique SHA256 cryptographic hash summarising both the .cabal package description and the complete contents of the package.

This lets Stackage be exactly precise. Stackage snapshots used to look like this:

packages:
- hackage: List-0.5.2
- hackage: ListLike-4.2.1
...

Now it looks like this:

packages:
- hackage: ALUT-2.4.0.3@sha256:ab8c2af4c13bc04c7f0f71433ca396664a4c01873f68180983718c8286d8ee05,4118
  pantry-tree:
    size: 1562
    sha256: c9968ebed74fd3956ec7fb67d68e23266b52f55b2d53745defeae20fbcba5579
- hackage: ANum-0.2.0.2@sha256:c28c0a9779ba6e7c68b5bf9e395ea886563889bfa2c38583c69dd10aa283822e,1075
  pantry-tree:
    size: 355
    sha256: ba7baa3fadf0a733517fd49c73116af23ccb2e243e08b3e09848dcc40de6bc90

So we're able to CAS-identify the .cabal file by a hash and length:

ALUT-2.4.0.3@sha256:ab8c2af4c13bc04c7f0f71433ca396664a4c01873f68180983718c8286d8ee05,4118

And we're able to CAS-identify the contents of the package:

pantry-tree:
  size: 355
  sha256: ba7baa3fadf0a733517fd49c73116af23ccb2e243e08b3e09848dcc40de6bc90

Additionally, each and every file within the package is CAS-stored. The "pantry-tree" refers to a list of CAS hash-len keys (which is itself serialised to a binary blob and stored in the same CAS store as the files inside the tarball). With every file stored individually, we remove a lot of the duplication we had when storing a whole tarball for every single variation of a package.

Parenthetically, the 01-index.tar that Hackage serves up, containing all the latest .cabal files and revisions, has to be downloaded every time. As this file is quite large, this is slow and wasteful.

Another side point: Hackage Security is not needed or consulted for this. CAS already allows us to know in advance whether what we are receiving is correct or not, as stated elsewhere.

When switching to a newer snapshot, lots of packages will be updated, but within each package, only a few files will have changed. Therefore we only need to download those few files that are different. However, to achieve that, we need an online service capable of serving up those blobs by their SHA256...

Enter Casa

As announced in our Casa post, Casa stands for "content-addressable storage archive" (and also means "home" in Romance languages); it is an online service we're announcing to store packages in a content-addressable way.

Now the same process which produces Stackage snapshots can also:

  • Download all package versions and revisions from Hackage, and store them in a Pantry database.
  • Download all Stackage snapshots, and store them in the same Pantry database.
  • All the unique CAS blobs stored in the pantry database are then pushed to Casa, completing the circle.

Stack can now download all its assets needed to build a package from Casa:

  • Stackage snapshots.
  • Cabal files.
  • Individual package files.

Furthermore, the snapshot format of Stackage supports specifying locations other than Hackage, such as a git repository at a given commit, or a URL with a tarball. These would also be automatically pushed to Casa, and Stack would download them from Casa automatically like any other package. Parenthetically, Stackage does not currently include packages from outside of Hackage, but Stack's custom snapshots--which use the same format--do support that.

Internal Company Casas

Companies often run their own Hackage on their own network (or IP-limited public server) and upload their custom packages to it, to be used by everyone in the company.

With the advent of Stack, this became less necessary, because it's trivial to fork any package on GitHub and then link to the Git repo in a stack.yaml. Plus, it's more reproducible, because you refer to a hash rather than a mutable version. Combined with the Pantry-based SHA256+length described above, you don't have to trust GitHub to serve the right content, either.

The Casa repository is here which includes both the server and a (Haskell) client library with which you can push arbitrary files to the casa service. Additionally, to populate your Casa server with everything from a given snapshot, or all of Hackage, you can use casa-curator from the curator repo, which is what we use ourselves.

If you're a company interested in running your own Casa server, please contact us. Or, if you'd like to discuss the possibility of caching packages in binary form, and therefore skipping the build step altogether, please contact us. Also contact us if you would like to discuss storing GHC binary releases in Casa and having Stack pull from it, allowing for a completely Casa-enabled toolchain.

Summary

Here's what we've brought to Haskell build infrastructure:

  • Reliable, reproducible references to packages and their files.
  • De-duplication of package files; fewer things to download, on your dev machine or in CI.
  • A server that is easy to use and rely on.
  • An archive of your own that is trivial to run.

When you upgrade to Stack master or the next release of Stack, you will automatically be using the Casa server.

We believe this CAS architecture has use in other language ecosystems, not just Haskell. See the Casa post for more details.

December 16, 2019 12:13 PM

December 14, 2019

Donnacha Oisín Kidney

Lazy Constructive Numbers and the Stern-Brocot Tree

Posted on December 14, 2019
Tags: Haskell, Agda

In dependently typed languages, it’s often important to figure out a good “low-level” representation for some concept. The natural numbers, for instance:
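The code block here did not survive extraction; the definition being described is presumably the standard Peano naturals (shown in Haskell notation, though the post may well have used Agda):

```haskell
-- Unary natural numbers: zero, or the successor of a natural.
data Nat = Z | S Nat

-- e.g. the number three:
three :: Nat
three = S (S (S Z))

-- the unary structure is what makes this costly in space and time:
toInt :: Nat -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt three)
-- 3
```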

For “real” applications, of course, these numbers are offensively inefficient, in terms of both space and time. But that’s not what I’m after here: I’m looking for a type which best describes the essence of the natural numbers, and that can be used to prove and think about them. In that sense, this representation is second to none: it’s basically the simplest possible type which can represent the naturals.

Let’s nail down that idea a little better. What do we mean when we say a type is a “good” representation of some concept?

  • There should be no redundancy. The type for the natural numbers above has this property: every natural number has one (and only one) canonical representative in Nat. Compare that to the following possible representation for the integers:

    There are two ways to represent 0 here: as Pos Z or Neg Z.

    Of course, you can quotient out the redundancy in Cubical Agda, or normalise on construction every time, but either of these workarounds gets your representation a demerit.

  • Operations should be definable simply and directly on the representation. Points docked for converting to and from some non-normalised form.

  • That conversion, however, can exist, and ideally should exist, in some fundamental way. You should be able to establish an efficient isomorphism with other representations of the same concept.

  • Properties about the type should correspond to intuitive properties about the representation. For Nat above, this means things like order: the usual order on the natural numbers again has a straightforward analogue on Nat.

With that laundry list of requirements, it’s no wonder that it’s often tricky to figure out the “right” type for a concept.

In this post, I’m going to talk about a type for the rational numbers, and I’m going to try satisfy those requirements as best I can.

The Rationals as a Pair of Numbers

Our first attempt at representing the rationals might use a fraction:
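The elided definition is presumably a plain numerator/denominator pair, something like this (names assumed):

```haskell
-- A hypothetical unnormalised fraction: numerator over denominator.
data Frac = Frac Integer Integer deriving (Show, Eq)

main :: IO ()
main = do
  -- 1/2 and 2/4 denote the same rational (cross-multiplication agrees)...
  print (1 * 4 == 2 * (2 :: Integer))  -- True
  -- ...but are different values of the type, hence the redundancy:
  print (Frac 1 2 == Frac 2 4)         -- False
```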

This obviously fails the redundancy property. The fractions 1/2 and 2/4 represent the same number, but have different underlying values.

So the type isn’t suitable as a potential representation for the rationals. That’s not to say that this type is useless: far from it! Indeed, Haskell’s Data.Ratio uses something quite like this to implement rationals.

If you’re going to have redundant elements, there are two broad ways to deal with them. Data.Ratio’s approach is to normalise on construction, and only export a constructor which does this. This gives you a pretty good guarantee that there won’t be any unreduced fractions lying around in your program. Agda’s standard library also uses an approach like this, although the fact that the numerator and denominator are coprime is statically verified by way of a proof carried in the type.

The other way to deal with redundancy is by quotient. In Haskell, that kind of means doing the following:

We don’t have real quotient types in Haskell, but this gets the idea across: we haven’t normalised our representation internally, but as far as anyone using the type is concerned, they shouldn’t be able to tell the difference between 1/2 and 2/4.

The Num instance is pretty much just a restating of the axioms for fractions.

Num instance for Frac.

Cubical Agda, of course, does have real quotient types. There, the Eq instance becomes a path constructor.

But we’ll leave the Agda stuff for another post.

The Rationals as a Trace of Euclid’s Algorithm

Now we get to the cool stuff. To reduce a fraction, we usually do something like getting the greatest common divisor of each operand. One nice way to do that is to use Euclid’s algorithm:
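The post's implementation was lost in extraction; a subtractive Euclid's algorithm that also records the comparison it makes at each step (the trick the next paragraph relies on) might look like this (a hedged reconstruction, names assumed; only defined for positive inputs):

```haskell
-- Subtractive Euclid's algorithm, returning the gcd alongside
-- the trace of comparison results it made along the way.
euclid :: Int -> Int -> (Int, [Ordering])
euclid n m = case compare n m of
  LT -> let (g, cs) = euclid n (m - n) in (g, LT : cs)
  EQ -> (n, [])
  GT -> let (g, cs) = euclid (n - m) m in (g, GT : cs)

main :: IO ()
main = do
  print (euclid 2 3)  -- (1,[LT,GT])
  print (euclid 4 6)  -- (2,[LT,GT])
  print (euclid 5 6)  -- (1,[LT,GT,GT,GT,GT])
```

Notice that 2/3 and 4/6 produce identical comparison chains, which is exactly the observation made next.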

Let’s run that function on three different inputs: 2/3, 4/6, and 5/6.

Those all return the right things, but that’s not what’s interesting here: look at the chain of comparison results. For the two fractions which are equivalent, their chains are equal.

This turns out to hold in general. Every rational number can be (uniquely!) represented as a list of bits, where each bit is a comparison result from Euclid’s algorithm.

And since we used unfoldr, it’s easy to reverse the algorithm to convert from the representation to a pair of numbers.
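A sketch of both directions, with hypothetical names (toBits, fromBits) standing in for the post’s originals:

```haskell
import Data.List (unfoldr)

data Bit = O | I deriving (Eq, Show)

-- One bit per comparison made by the subtractive Euclid's algorithm.
toBits :: (Integer, Integer) -> [Bit]
toBits = unfoldr step
  where
    step (n, d) = case compare n d of
      LT -> Just (O, (n, d - n))
      GT -> Just (I, (n - d, d))
      EQ -> Nothing

-- Running the comparisons backwards recovers a fraction, always in
-- reduced form: with these names, fromBits . toBits normalises a
-- fraction, and toBits . fromBits is the identity on bit strings.
fromBits :: [Bit] -> (Integer, Integer)
fromBits = foldr step (1, 1)
  where
    step O (n, d) = (n, d + n)
    step I (n, d) = (n + d, d)
```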

Now abs . rep is the identity function, and rep . abs reduces a fraction! We have identified an isomorphism between our type (a list of bits) and the rational numbers!

Well, between the positive rational numbers. Not to worry: we can add a sign before it. And, because our type doesn’t actually include 0, we don’t get the duplicate 0 problems we did with Int.

We can also define some operations on the type, by converting back and forth.

Rationals as a Path into The Stern-Brocot Tree

So we have a construction that has our desired property of canonicity. Even better, there’s a reasonably efficient algorithm to convert to and from it! Our next task will be examining the representation itself, and seeing what information we can get from it.

To do so we’ll turn to the subject of the title of this post: the Stern-Brocot tree.

The Stern-Brocot Tree. By Aaron Rotenberg, CC BY-SA 3.0, from Wikimedia Commons.

This tree, pictured above, has some incredible properties:

  • It contains every rational number (in reduced form) exactly once.
  • It is a binary search tree.

Both of these properties make it an excellent candidate for basing a representation on. As it turns out, that’s what we already did! Our list of bits above is precisely a path into the Stern-Brocot tree, where every O is a left turn and every I a right turn.

Incrementalising

The most important fact we’ve gleaned so far from the Stern-Brocot tree is that our representation is lexicographically ordered. While that may not seem like much, it turns our list of bits into a progressively-narrowing interval, which generates more and more accurate estimates of the true value. When we see a O at the head of the list, we know that the result must be smaller than 1; what follows will tell us on what side of <semantics>12<annotation encoding="application/x-tex">\frac{1}{2}</annotation></semantics> the answer lies, and so on.

This turns out to be quite a useful property: we often don’t need exact precision for some calculation, but rather some approximate answer. It’s even rarer still that we know exactly how much precision we need for a given expression (which is what floating point demands). Usually, the precision we need changes quite dynamically. If a particular number plays a more influential role in some expression, for instance, its precision is more important than the others!

By producing a lazy list of bits, however, we can allow the consumer to specify the precision they need, by demanding those bits as they go along. (In the literature, this kind of thing is referred to as “lazy exact arithmetic”, and it’s quite fascinating. The representation presented here, however, is not very suitable for any real computation: it’s incredibly slow. There is a paper on the topic: Niqui (2007), which examines the Stern-Brocot numbers in Coq).

In proofs, the benefit is even more pronounced: finding out that a number is in a given range by just inspecting the first element of the list gives an excellent recursion strategy. We can do case analysis on: “what if it’s 1”, “what if it’s less than 1”, and “what if it’s greater than 1”, which is quite intuitive.

There’s one problem: our evaluation function is defined as a foldr, and forces the accumulator at every step. We will need to figure out another evaluator which folds from the left.

Intervals

So let’s look more at the “interval” interpretation of the Stern-Brocot tree. The first interval is <semantics>(01,10)<annotation encoding="application/x-tex">\left(\frac{0}{1},\frac{1}{0}\right)</annotation></semantics>: neither of these values are actually members of the type, which is why we’re not breaking any major rules with the <semantics>10<annotation encoding="application/x-tex">\frac{1}{0}</annotation></semantics>. To move left (to <semantics>12<annotation encoding="application/x-tex">\frac{1}{2}</annotation></semantics> in the diagram), we need to use a peculiar operation called “child’s addition”, often denoted with a <semantics>⊕<annotation encoding="application/x-tex">\oplus</annotation></semantics>.

<semantics>ab⊕cd=a+cb+d<annotation encoding="application/x-tex"> \frac{a}{b} \oplus \frac{c}{d} = \frac{a+c}{b+d} </annotation></semantics>

The name comes from the fact that it’s a very common mistaken definition of addition on fractions.

Right, next steps: to move left in an interval, we do the following:

<semantics>left(lb,ub)=(lb,lb⊕ub)<annotation encoding="application/x-tex"> \text{left} \left(\mathit{lb},\mathit{ub} \right) = \left( \mathit{lb}, \mathit{lb} \oplus \mathit{ub} \right) </annotation></semantics>

In other words, we narrow the right-hand-side of the interval. To move right is the opposite:

<semantics>right(lb,ub)=(lb⊕ub,ub)<annotation encoding="application/x-tex"> \text{right} \left(\mathit{lb},\mathit{ub} \right) = \left( \mathit{lb} \oplus \mathit{ub} , \mathit{ub} \right) </annotation></semantics>

And finally, when we hit the end of the sequence, we take the mediant value.

<semantics>mediant(lb,ub)=lb⊕ub<annotation encoding="application/x-tex"> \text{mediant}\left(\mathit{lb} , \mathit{ub}\right) = \mathit{lb} \oplus \mathit{ub} </annotation></semantics>

From this, we get a straightforward left fold which can compute our fraction.
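Such a left fold might look like the following sketch (names are illustrative, not the post’s originals):

```haskell
data Bit = O | I deriving (Eq, Show)

type Q = (Integer, Integer)

-- "Child's addition": the mediant of the interval's endpoints.
med :: Q -> Q -> Q
med (a, b) (c, d) = (a + c, b + d)

-- Start from the interval (0/1, 1/0), narrow it once per bit, and
-- take the mediant at the end of the string.
evalSB :: [Bit] -> Q
evalSB = uncurry med . foldl step ((0, 1), (1, 0))
  where
    step (lb, ub) O = (lb, med lb ub)
    step (lb, ub) I = (med lb ub, ub)
```

As a check against the encoding described later in the post, `evalSB [O, I]` gives 2/3 and `evalSB [I, I]` gives 3.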

Monoids and Matrices

Before diving in and using this new evaluator to incrementalise our functions, let’s take a look at what’s going on behind the scenes of the “interval narrowing” idea.

It turns out that the “interval� is really a <semantics>2×2<annotation encoding="application/x-tex">2\times2</annotation></semantics> square matrix in disguise (albeit a little reordered).

<semantics>(ab,cd)=(cadb)<annotation encoding="application/x-tex"> \left( \frac{a}{b} , \frac{c}{d} \right) = \left( \begin{matrix} c & a \\ d & b \end{matrix} \right) </annotation></semantics>

Seen in this way, the beginning interval—<semantics>(01,10)<annotation encoding="application/x-tex">\left(\frac{0}{1} , \frac{1}{0}\right)</annotation></semantics>—is actually the identity matrix. Also, the two values in the second row of the tree correspond to special matrices which we will refer to as <semantics>L<annotation encoding="application/x-tex">L</annotation></semantics> and <semantics>R<annotation encoding="application/x-tex">R</annotation></semantics>.

<semantics>L=(1011)R=(1101)<annotation encoding="application/x-tex"> L = \left( \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \right) \; R = \left( \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right) </annotation></semantics>

It turns out that the left and right functions we defined earlier correspond to multiplication by these matrices.

<semantics>left(x)=xL<annotation encoding="application/x-tex"> \text{left}(x) = xL </annotation></semantics> <semantics>right(x)=xR<annotation encoding="application/x-tex"> \text{right}(x) = xR </annotation></semantics>

Since matrix multiplication is associative, what we have here is a monoid. mempty is the open interval at the beginning, and mappend is matrix multiplication. This is the property that lets us incrementalise the whole thing, by the way: associativity allows us to decide when to start and stop the calculation.
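As a sketch, the monoid can be written out directly (M2, l, and r are illustrative names):

```haskell
-- 2x2 integer matrices under matrix multiplication; mempty is the
-- identity matrix, i.e. the starting interval (0/1, 1/0).
data M2 = M2 Integer Integer Integer Integer deriving (Eq, Show)

instance Semigroup M2 where
  M2 a b c d <> M2 e f g h =
    M2 (a * e + b * g) (a * f + b * h) (c * e + d * g) (c * f + d * h)

instance Monoid M2 where
  mempty = M2 1 0 0 1

-- The two generators: a left turn and a right turn in the tree.
l, r :: M2
l = M2 1 0 1 1
r = M2 1 1 0 1
```

Multiplying out `l <> r` reproduces the interval reached by the bit string OI from the fold above.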

Incrementalising!

We now have all the parts we need. First, we will write an evaluator that returns increasingly precise intervals. Our friend scanl fits the requirement precisely.
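A sketch of that scanl-based evaluator (self-contained, with illustrative names):

```haskell
data Bit = O | I deriving (Eq, Show)

type Q = (Integer, Integer)

med :: Q -> Q -> Q
med (a, b) (c, d) = (a + c, b + d)

-- Each prefix of the bit string yields a tighter enclosing interval;
-- because scanl is lazy in the tail, this works on infinite strings.
approximations :: [Bit] -> [(Q, Q)]
approximations = scanl step ((0, 1), (1, 0))
  where
    step (lb, ub) O = (lb, med lb ub)
    step (lb, ub) I = (med lb ub, ub)
```

For instance, `take 3 (approximations (repeat O))` terminates, narrowing the upper bound towards 0.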

Next, we will need to combine two of these lists with some operation on fractions.

The operation must respect orders in the proper way for this to be valid.

This pops one bit from each list in turn: one of the many possible optimisations would be to pull more information from the more informative value, in some clever way.

Finally, we have a function which incrementally runs some binary operator lazily on a list of bits.

The function only ever inspects the next bit when it absolutely needs to.

The helper function f here is the “incremental� version. p takes over when the precision of the input is exhausted.

We can use this to write an addition function (with some added special cases to speed things up).

We could also try to optimise when we look for a new bit. Above we have handled every case where one of the rationals is preceded by a whole part. In addition, after encountering two Os, if the two remaining strings are inverses of each other the result will be 1; i.e. OOIOOI + OIOIIO = <semantics>11<annotation encoding="application/x-tex">\frac{1}{1}</annotation></semantics>. We could try to spot this, testing with a comparison of the mediant only when the bits are the same. You’ve doubtless spotted some other possible optimisations: I have yet to look into them!

Inverting Functions

One of the other applications of lazy rationals is that they can begin to look like the real numbers. For instance, the p helper function above is basically defined extensionally. Instead of stating the value of the number, we give a function which tells us when we’ve made something too big or too small (which sounds an awful lot like a Dedekind cut to my ears). Here’s a function which inverts a given function on fractions, for instance.

Of course, the function has to satisfy all kinds of extra properties that I haven’t really thought a lot about yet, but no matter. We can use it to invert a squaring function:

And we can use this to get successive approximations to <semantics>2<annotation encoding="application/x-tex">\sqrt{2}</annotation></semantics>!

Conclusions and Related Work

Using the Stern-Brocot tree to represent the rationals was formalised in Coq in Bertot (2003). The corresponding lazy operations are formalised in QArith. Its theory and implementation is described in Niqui (2007). Unfortunately, I found most of the algorithms impenetrably complex, so I can’t really judge how they compare to the ones I have here.

I mentioned that one of the reasons you might want lazy rational arithmetic is that it can help with certain proofs. While this is true, in general the two main reasons people reach for lazy arithmetic is efficiency and as a way to get to the real numbers.

From the perspective of efficiency, the Stern-Brocot tree is probably a bad idea. You may have noticed that the right branch of the tree contains all the whole numbers: this means that the whole part is encoded in unary. Beyond that, we generally have to convert to some fraction in order to do any calculation, which is massively expensive.

The problem is that bits in the same position in different numbers don’t necessarily correspond to the same quantities. In base 10, for instance, the numbers 561 and 1024 have values in the “ones� position of 1 and 4, respectively. We can work with those two values independent of the rest of the number, which can lead to quicker algorithms.

Looking at the Stern-Brocot encoding, the numbers <semantics>23<annotation encoding="application/x-tex">\frac{2}{3}</annotation></semantics> and 3 are represented by OI and II, respectively. That second I in each, despite being in the same position, corresponds to different values: <semantics>13<annotation encoding="application/x-tex">\frac{1}{3}</annotation></semantics> in the first, and <semantics>32<annotation encoding="application/x-tex">\frac{3}{2}</annotation></semantics> in the second.

Solutions to both of these problems necessitate losing the one-to-one property of the representation. We could improve the size of the representation of terms by having our <semantics>L<annotation encoding="application/x-tex">L</annotation></semantics> and <semantics>R<annotation encoding="application/x-tex">R</annotation></semantics> matrices be the following (Kůrka 2014):

<semantics>L=(1012)R=(2101)<annotation encoding="application/x-tex"> L = \left( \begin{matrix} 1 & 0 \\ 1 & 2 \end{matrix} \right) \; R = \left( \begin{matrix} 2 & 1 \\ 0 & 1 \end{matrix} \right) </annotation></semantics>

But now there will be gaps in the tree. This basically means we’ll have to use infinite repeating bits to represent terms like <semantics>12<annotation encoding="application/x-tex">\frac{1}{2}</annotation></semantics>.

We could solve the other problem by throwing out the Stern-Brocot tree entirely and using a more traditional positional number system. Again, this introduces redundancy: in order to represent some fraction which doesn’t divide properly into the base of the number system you have to use repeating decimals.

The second reason for lazy rational arithmetic is that it can be a crucial component in building a constructive interpretation of the real numbers. This in particular is an area of real excitement at the moment: HoTT has opened up some interesting avenues that weren’t possible before for constructing the reals (Bauer 2016).

In a future post, I might present a formalisation of these numbers in Agda. I also intend to look at the dyadic numbers.

Update 26/12/2019: thanks Anton Felix Lorenzen and Joseph C. Sible for spotting some mistakes in this post.

References

Bauer, Andrej. 2016. “The real numbers in homotopy type theory.” Faro, Portugal. http://math.andrej.com/wp-content/uploads/2016/06/hott-reals-cca2016.pdf.

Bertot, Yves. 2003. “A simple canonical representation of rational numbers.” Electronic Notes in Theoretical Computer Science 85 (7). Mathematics, Logic and Computation (Satellite Event of ICALP 2003) (September): 1–16. doi:10.1016/S1571-0661(04)80754-0. http://www.sciencedirect.com/science/article/pii/S1571066104807540.

Kůrka, Petr. 2014. “Exact real arithmetic for interval number systems.” Theoretical Computer Science 542 (July): 32–43. doi:10.1016/j.tcs.2014.04.030. http://www.sciencedirect.com/science/article/pii/S0304397514003351.

Niqui, Milad. 2007. “Exact arithmetic on the Stern-Brocot tree.” Journal of Discrete Algorithms 5 (2). 2004 Symposium on String Processing and Information Retrieval (June): 356–379. doi:10.1016/j.jda.2005.03.007. http://www.sciencedirect.com/science/article/pii/S1570866706000311.

by Donnacha Oisín Kidney at December 14, 2019 12:00 AM

December 12, 2019

Gabriel Gonzalez

Prefer to use fail for IO exceptions


This post briefly explains why I commonly suggest that people replace error with fail when raising IOExceptions.

The main difference between error and fail can be summarized by the following equations:

In other words, any attempt to evaluate an expression that is an error will raise the error. Evaluating an expression that is a fail does not raise the error or trigger any side effects.
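The difference can be observed directly. The following sketch (raisesOnEvaluate is a hypothetical helper, not part of the post) forces each thunk with evaluate without ever running the action:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Does forcing this IO action's thunk (without running it) raise
-- an exception?
raisesOnEvaluate :: IO () -> IO Bool
raisesOnEvaluate act = do
  r <- try (evaluate act) :: IO (Either SomeException (IO ()))
  pure (either (const True) (const False) r)
```

Here `raisesOnEvaluate (error "boom")` returns `True`, while `raisesOnEvaluate (fail "boom")` returns `False`: the fail action only raises when it is actually run.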

Why does this matter? One of the nice properties of Haskell is that Haskell separates effect order from evaluation order. For example, evaluating a print statement is not the same thing as running it:
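To make the effect observable in a test, this sketch uses an IORef write in place of print (the structure, not the code, is from the post):

```haskell
import Control.Exception (evaluate)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Build an action, force its thunk, then run it: only the final
-- step actually performs the effect.
demo :: IO (Bool, Bool)
demo = do
  ref <- newIORef False
  let action = writeIORef ref True  -- stands in for a print statement
  _ <- evaluate action              -- evaluating does not run it
  before <- readIORef ref
  action                            -- running it performs the effect
  after <- readIORef ref
  pure (before, after)
```

`demo` returns `(False, True)`: the effect is absent after evaluation and present only after the action is run.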

This insensitivity to evaluation order makes Haskell code easier to maintain. Specifically, this insensitivity frees us from concerning ourselves with evaluation order in the same way garbage collection frees us from concerning ourselves with memory management.

Once we begin using evaluation-sensitive primitives such as error we necessarily need to program with greater caution than before. Now any time we manipulate a subroutine of type IO a we need to take care not to prematurely force the thunk storing that subroutine.

How likely are we to prematurely evaluate a subroutine? Truthfully, not very likely, but fortunately taking the extra precaution to use fail is not only theoretically safer, it is also one character shorter than using error.

Limitations

Note that this advice applies solely to the case of raising IOExceptions within an IO subroutine. fail is not necessarily safer than error in other cases, because fail is a method of the MonadFail typeclass and the typeclass does not guarantee in general that fail is safe.

fail happens to do the correct thing for IO:
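Concretely, fail in IO raises a catchable IOError (a userError), as this sketch demonstrates (failMessage is a hypothetical helper):

```haskell
import Control.Exception (try)
import System.IO.Error (ioeGetErrorString)

-- Run `fail` in IO and report the IOError's description, showing
-- that the exception raised is an ordinary, catchable userError.
failMessage :: IO String
failMessage = do
  r <- try (fail "boom" :: IO ()) :: IO (Either IOError ())
  pure (either ioeGetErrorString (const "no exception") r)
```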

… but for other MonadFail instances fail could be a synonym for error and offer no additional protective value.

If you want to future-proof your code and ensure that you never use the wrong MonadFail instance, you can do one of two things:

  • Enable the TypeApplications language extension and write fail @IO string
  • Use Control.Exception.throwIO (userError string) instead of fail
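Both options pin down the IO instance explicitly, so a later refactoring into another monad cannot silently pick a less safe MonadFail (the names oops1 and oops2 are illustrative):

```haskell
{-# LANGUAGE TypeApplications #-}
import Control.Exception (throwIO)

-- Option 1: fix the instance with a type application.
oops1 :: IO a
oops1 = fail @IO "boom"

-- Option 2: skip MonadFail entirely and throw the IOError directly.
oops2 :: IO a
oops2 = throwIO (userError "boom")
```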

However, even if you choose not to future-proof your code fail is still no worse than error in this regard.

by Gabriel Gonzalez (noreply@blogger.com) at December 12, 2019 04:59 PM

Brent Yorgey

Computing Eulerian paths is harder than you think

Everyone who has studied any graph theory at all knows the celebrated story of the Seven Bridges of Königsberg, and how Euler gave birth to modern graph theory while solving the problem.

Euler’s proof is clever, incisive, not hard to understand, and a great introduction to the kind of abstract reasoning we can do about graphs. There’s little wonder that it is often used as one of the first nontrivial graph theory results students are introduced to, e.g. in a discrete mathematics course. (Indeed, I will be teaching discrete mathematics in the spring and certainly plan to talk about Eulerian paths!)

Euler’s 1735 solution was not constructive, and in fact he really only established one direction of the “if and only if”:

If a graph has an Eulerian path, then it has exactly zero or two vertices with odd degree.

This can be used to rule out the existence of Eulerian paths in graphs without the right vertex degrees, which was Euler’s specific motivation. However, one suspects that Euler knew it was an if and only if, and didn’t write about the other direction (if a graph has exactly zero or two vertices with odd degree, then it has an Eulerian path) because he thought it was trivial.1

The first person to publish a full proof of both directions, including an actual algorithm for finding an Eulerian path, seems to be Carl Hierholzer, whose friend published a posthumous paper in Hierholzer’s name after his untimely death in 1871, a few weeks before his 31st birthday.2 (Notice that this was almost 150 years after Euler’s original paper!) If the vertex degrees cooperate, finding an Eulerian path is almost embarrassingly easy according to Hierholzer’s algorithm: starting at one of the odd-degree vertices (or anywhere you like if there are none), just start walking through the graph—any which way you please, it doesn’t matter!—visiting each edge at most once, until you get stuck. Then pick another part of the graph you haven’t visited, walk through it randomly, and splice that path into your original path. Repeat until you’ve explored the whole graph. And generalizing all of this to directed graphs isn’t much more complicated.

So, in summary, this is a well-studied problem, solved hundreds of years ago, that we present to students as a first example of a nontrivial yet still simple-to-understand graph proof and algorithm. So it should be pretty easy to code, right?

So what’s the problem?

Recently I came across the eulerianpath problem on Open Kattis, and I realized that although I have understood this algorithm on a theoretical level for almost two decades (I almost certainly learned it as a young undergraduate), I have never actually implemented it! So I set out to solve it.

Right away the difficulty rating of 5.7 tells us that something strange is going on. “Easy” problems—the kind of problems you can give to an undergraduate at the point in their education when they might first be presented with the problem of finding Eulerian paths—typically have a difficulty rating below 3. As I dove into trying to implement it, I quickly realized two things. First of all, given an arbitrary graph, there’s a lot of somewhat finicky work that has to be done to check whether the graph even has an Eulerian path, before running the algorithm proper:

  1. Calculate the degree of all graph vertices (e.g. by iterating through all the edges and incrementing appropriate counters for the endpoints of each edge).
  2. Check if the degrees satisfy Euler’s criteria for the existence of a solution, by iterating through all vertices and making sure their degrees are all even, but also counting the number of vertices with an odd degree to make sure it is either zero or two. At the same time, if we see an odd-degree vertex, remember it so we can be sure to start the path there.
  3. If all vertices have even degree, pick an arbitrary node as the start vertex.
  4. Ensure the graph is connected (e.g. by doing a depth-first search)—Euler kind of took this for granted, but this technically has to be part of a correct statement of the theorem. If we have a disconnected graph, each component could have an Eulerian path or cycle without the entire graph having one.

And if the graph is directed—as it is in the eulerianpath problem on Kattis—then the above steps get even more finicky. In step 1, we have to count the in- and outdegree of each vertex separately; in step 2, we have to check that the in- and outdegrees of all vertices are equal, except for possibly two vertices where one of them has exactly one more outgoing than incoming edge (which must be the start vertex), and vice versa for the other vertex; in step 4, we have to make sure to start the DFS from the chosen start vertex, because the graph need not be strongly connected, it’s enough for the entire graph to be reachable from the start vertex.

The second thing I realized is that Hierholzer’s algorithm proper—walk around until getting stuck, then repeatedly explore unexplored parts of the graph and splice them into the path being built—is still rather vague, and it’s nontrivial to figure out how to do it, and what data structures to use, so that everything runs in time linear in the number of edges. For example, we don’t want to iterate over the whole graph—or even just the whole path built so far—to find the next unexplored part of the graph every time we get stuck. We also need to be able to do the path splicing in constant time; so, for example, we can’t just store the path in a list or array, since then splicing in a new path segment would require copying the entire path after that point to make space. I finally found a clever solution that pushes the nodes being explored on a stack; when we get stuck, we start popping nodes, placing them into an array which will hold the final path (starting from the end), and keep popping until we find a node with an unexplored outgoing edge, then switch back into exploration mode, pushing things on the stack until we get stuck again, and so on. But this is also nontrivial to code correctly since there are many lurking off-by-one errors and so on. And I haven’t even talked about how we keep track of which edges have been explored and quickly find the next unexplored edge from a vertex.
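A minimal functional sketch of that stack discipline, for a directed graph given as adjacency lists (this is not the implementation discussed above, and it assumes an Eulerian path from start exists):

```haskell
import qualified Data.Map.Strict as M

-- Walk greedily, pushing vertices on a stack; when stuck, pop
-- vertices onto the output path until a vertex with unused outgoing
-- edges is found, which resumes exploration there.  That popping is
-- exactly the "splicing" step, done in constant time per edge.
eulerianPath :: Ord v => M.Map v [v] -> v -> [v]
eulerianPath g0 start = go g0 [start] []
  where
    go _ [] path = path
    go g (v : stack) path =
      case M.lookup v g of
        Just (u : us) -> go (M.insert v us g) (u : v : stack) path
        _             -> go g stack (v : path)
```

A real solution would track edge usage more cleverly than rebuilding the map, but the shape of the algorithm is the same.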

I think it’s worth writing another blog post or two with more details of how the implementation works, both in an imperative language and in a pure functional language, and I may very well do just that. But in any case, what is it about this problem that results in such a large gap between the ease of understanding its solution theoretically, and the difficulty of actually implementing it?


  1. Actually, the way I have stated the other direction of the if and only if is technically false!—can you spot the reason why?↩

  2. Though apparently someone named Listing published the basic idea of the proof, with some details omitted, some decades earlier. I’ve gotten all this from Herbert Fleischner, Eulerian Graphs and Related Topics, Annals of Discrete Mathematics 45, Elsevier 1990. Fleischner reproduces Euler’s original paper as well as Hierholzer’s, together with English translations.↩

by Brent at December 12, 2019 01:19 PM

December 10, 2019

Joey Hess

announcing the filepath-bytestring haskell library

filepath-bytestring is a drop-in replacement for the standard haskell filepath library, that operates on RawFilePath rather than FilePath.

The benefit, of course, is speed. "foo" </> "bar" is around 25% faster with the new library. dropTrailingPathSeparator is 120% faster. But the real speed benefits probably come when a program is able to input filepaths as ByteStrings, manipulate them, and operate on the files, all without using String.

It's extensively tested: not only does it run all the same doctests that the filepath library does, but each function is also quickchecked to behave the same as the equivalent function from filepath.

While I implemented almost everything, I did leave off some functions that operate on PATH, which seem unlikely to be useful, and the complicated normalise and stuff that uses it.

This work was sponsored by Jake Vosloo on Patreon.

December 10, 2019 08:31 PM

Philip Wadler

Programming Languages for Trustworthy Systems


The University of Edinburgh seeks to appoint a Lecturer/Senior Lecturer/Reader in Programming Languages for Trustworthy Systems.  An ideal candidate will be able to contribute to and complement the expertise of the Programming Languages & Foundations Group, which is part of the Laboratory for Foundations of Computer Science (LFCS).

The successful candidate will have a PhD, an established research agenda and the enthusiasm and ability to undertake original research, to lead a research group, and to engage with teaching and academic supervision, with expertise in at least one of the following:
  • Practical systems verification: e.g. for operating systems, databases, compilers, distributed systems
  • Language-based verification: static analysis, verified systems / smart contract programming, types, SAT/SMT solving
  • Engineering trustworthy software: automated/property-based testing, bug finding, dynamic instrumentation, runtime verification
We are seeking current and future leaders in the field.

Applications from individuals from underrepresented groups in Computer Science are encouraged.

Appointment will be full-time and open-ended.

The post is situated in the Laboratory for Foundations of Computer Science, the Institute in which the School's expertise in functional programming, logic and semantics, and theoretical computer science is concentrated.  Collaboration relating to PL across the School is encouraged and supported by the School's Programming Languages Research Programme, to which the successful applicant will be encouraged to contribute. Applicants whose PL-related research aligns well with particular strengths of the School, such as machine learning, AI, robotics, compilers, systems, and security, are encouraged to apply and highlight these areas of alignment.  

All applications must contain the following supporting documents:
• Teaching statement outlining teaching philosophy, interests and plans
• Research statement outlining the candidate’s research vision and future plans
• Full CV (resume) and publication list

The University job posting and submission site, including detailed application instructions, is at this link:


Applications close at 5pm GMT on January 31, 2020. This deadline is firm, so applicants are exhorted to begin their applications in advance.

Shortlisting for this post is due early February with interview dates for this post expected to fall in early April 2020. Feedback will only be provided to interviewed candidates. References will be sought for all shortlisted candidates. Please indicate on your application form if you are happy for your referees to be contacted.

Informal enquiries may be addressed to Prof Philip Wadler (wadler@inf.ed.ac.uk).

Lecturer Grade: UE08 (£41,526 - £49,553) 
Senior Lecturer or Reader Grade: UE09 (£52,559 - £59,135)

The School is advertising a number of positions, including this one, as described here:


About the Laboratory for Foundations of Computer Science

As one of the largest Institutes in the School of Informatics, and one of the largest theory research groups in the world, the Laboratory for Foundations of Computer Science combines expertise in all aspects of theoretical computer science, including algorithms and complexity, cryptography, database theory, logic and semantics, and quantum computing. The Programming Languages and Foundations group includes over 25 students, researchers and academic staff, working on functional programming, types, verification, semantics, software engineering, language-based security and new programming models. Past contributions to programming languages research originating at LFCS include Standard ML, the Edinburgh Logical Framework, models of concurrency such as the pi-calculus, and foundational semantic models of effects in programming languages, based on monads and more recently algebraic effects and handlers.

About the School of Informatics and University of Edinburgh

The School of Informatics at the University of Edinburgh is one of the largest in Europe, with more than 120 academic staff and a total of over 500 post-doctoral researchers, research students and support staff. Informatics at Edinburgh rated highest on Research Power in the most recent Research Excellence Framework. The School has strong links with industry, with dedicated business incubator space and well-established enterprise and business development programmes. The School of Informatics has recently established the Bayes Centre for Data Technology, which provide a locus for fruitful multi-disciplinary work, including a range of companies collocated in it. The School holds a Silver Athena SWAN award in recognition of our commitment to advance the representation of women in science, mathematics, engineering and technology. We are also Stonewall Scotland Diversity Champions actively promoting LGBT equality.

by Philip Wadler (noreply@blogger.com) at December 10, 2019 06:33 PM

December 09, 2019

Russell O'Connor

Stochastic Elections Canada 2019 Results

It is time to announce the results from Stochastic Elections Canada for the 43rd General Election.

Every vote counts with the stochastic election process, so we had to wait until all election results were validated before we could announce our results. However, stochastic election results are not very sensitive to small changes to the number of votes counted. The distributions for each candidate are typically only slightly adjusted.

Now we can announce our MP selection.

2019 Stochastic Election Simulation Results
Party Seats Seat Percentage Vote Percentage
Liberal 116 34.3% 33.1%
Conservative 102 30.2% 34.4%
NDP-New Democratic Party 61 18.0% 15.9%
Bloc Québécois 25 7.40% 7.69%
Green Party 23 6.80% 6.50%
People’s Party 6 1.78% 1.64%
Christian Heritage Party 1 0.296% 0.105%
Parti Rhinocéros 1 0.296% 0.0535%
Independent 3 0.888%

Results by province and by riding are available (electoral districts on page 2).

The results were generated from Elections Canada data. One hundred and eighty-one elected candidates differ from the actual 2019 election outcome.

The People’s Party holds the balance of power in this parliament. Assuming a Liberal party member becomes speaker of the house, the Liberals together with the Bloc Québécois and Green Party have 163 votes, and the Conservatives and NDP together have 163 votes. The People’s Party’s 6 votes are enough to decide which side reaches 169.

The rise in the Green Party’s popular vote allowed them to gain more seats this election. The Green Party has close to the same number of seats as the Bloc Québécois, which reflects the fact that they have close to the same popular vote, even though the Green Party’s votes are more diluted throughout Canada. This illustrates how sortition is a form of proportional electoral system.

Many proportional election systems require candidates to run under a party, or at least make it advantageous to run under a party. One notable advantage of sortition is that independent or unaffiliated candidates are not disadvantaged. While we did not select Jody Wilson-Raybould for her riding, Jane Philpott was elected to Markham—Stouffville. Also Archie MacKinnon was elected to Sydney—Victoria. And, with sortition, even the 396 residents of Miramichi—Grand Lake get a turn to have their choice of Mathew Grant Lawson to represent them in parliament.

This is only one example of the results of a stochastic election. Because of the stochastic nature of the election process, actual results may differ.

In Canada’s election process, it is sometimes advantageous not to vote for one’s preferred candidate. The stochastic election system is the only system in which it is always best to vote for your preferred candidate. Therefore, if the 2019 election had actually used a stochastic election system, people could have safely voted for their true preferences. The outcome could be somewhat different than what this simulation illustrates.

Related info

December 09, 2019 08:31 PM

Philip Wadler

Election Special: Antisemitism


In the run-up to the election, I am passing on a couple of resources in case readers find them of value.

Antisemitism has been so much in the news that everyone must believe there cannot be smoke without fire. But if you dig into the allegations, it becomes clear that a minuscule flame has been fanned for political purposes.
How Labour Became “Antisemitic”

Ever since his shock election to the Labour leadership in 2015, Jeremy Corbyn has been dogged by allegations of “antisemitism.” Both the media and hostile MPs claim he has failed to confront Jew-hate in party ranks — one Tory minister even said Corbyn would be “the first antisemitic Western leader since 1945.” Often bound up with debates on Israel and anti-imperialism, this has become one of the main lines of attack against Corbyn, both within and outside the party.
Yet for all the headlines about “mounting antisemitism” in Labour, we are rarely given any sense of its scale. Data released by the party in February 2019 showed that it had received 1,106 specific complaints of antisemitism since April 2018, of which just 673 regarded actual Labour members. The party membership stands at over half a million: the allegations, even if they were true, concern around 0.1 percent of the total.
Constant media talk of Labour’s “antisemitism crisis” has nonetheless warped all discussion of this issue. This is a key finding of Bad News for Labour, a new book on the party’s handling of antisemitism claims. The study is especially notable for its use of focus groups and polling to gauge public perceptions of the affair: when its authors commissioned Survation to ask 1,009 people how many Labour members faced antisemitism complaints, the average estimate — at 34 percent — was over three hundred times the published figures.

by Philip Wadler (noreply@blogger.com) at December 09, 2019 12:21 PM

Election Special: NHS


In the run-up to the election, I am passing on a couple of resources in case readers find them of value.

Must watch film - now offered free in time for General Election 2019.

The Great NHS Heist forensically examines not only how the Tories plan to sell off the NHS but how they are already well advanced in smashing it apart, selling off the fragments piece by piece.

by Philip Wadler (noreply@blogger.com) at December 09, 2019 11:51 AM

December 07, 2019

Magnus Therning

December 06, 2019

Chris Smith 2

My Takeaways: Fernando Alegre’s talk on CodeWorld in Louisiana

On Wednesday evening, Fernando Alegre spoke to the New York Haskell User Group about using CodeWorld to teach Haskell to high school students in Louisiana. Most of the students have no prior programming experience, and he’s been very successful at quite a large scale. The talk is on YouTube.

You should absolutely watch Fernando’s talk yourself. But here are the big points I took away from it.

#1: The scale of LSU’s work is very impressive.

The sheer scale of what Fernando is doing is staggering here. They have expanded from one high school in Baton Rouge in 2016 to more schools than we could count on his map. They are reaching about 800 students this school year, and it’s been doubling every year. Next year, with increased funding from several grants, they plan to triple the number to close to 2500 students per year. This is the major leagues, here!

There’s one big problem here: finding and keeping that many teachers is hard. Fernando mentioned that their number of students has been doubling every year, but the number of teachers has been growing linearly. Some of the issues here are:

  • Professional development costs money. While they’ve been able to fund teachers to attend so far, that has its limits. This is not a one-day retreat, either! LSU’s teachers attend a 6 week intensive training program, along with continuous online support after the main training session.
  • Teachers lack technical background. Because so many teachers are required, LSU cannot have too many prerequisites. The teachers often lack any past computer science experience, and don’t necessarily teach math, science, or anything technical. Some are afraid of math coming in! Fernando mentioned at one point that while the students are the ultimate goal, he sees teachers as his main challenge.
  • Teacher attrition is an issue. Louisiana schools also face a shortage of math and science teachers, and those who complete LSU’s training program for computational thinking often still find themselves back in math and science classrooms, instead. Several teachers have also left teaching to use their new skills themselves in the software industry.

There is also a lot of support work needed for a project of this size. Fernando mentioned they are looking to hire an entry-level software engineer, preferably working in Haskell, to help with the infrastructure needs of the project.

#2: The stakes are high.

Fernando talked for a while about their EIR research grant (search for Louisiana State University here for details), which is connected to the What Works Clearinghouse. What Works Clearinghouse is a database maintained by the U.S. Department of Education about which educational techniques do or don’t have research-supported impacts on student achievement.

A lot of approaches to teaching K-12 computer science have been studied in the past, and the results have not often impressed. For Logo, Scratch, and others, most research shows small or no gains in student achievement. Bootstrap, which is similar in many ways, has published research showing gains in student achievement on certain math word problems, but it was a smaller study with limited scope. In general, this problem is hard, and many have failed.

If the results here are different, it could make a big impact. This could be a large-scale research project demonstrating both success and transfer learning from computer science and functional programming to other fields (especially mathematics), and it could have a major impact in education. On the other hand, if the study produces no evidence of learning, that could matter as well!

Fernando told a powerful cautionary tale about Logo and Seymour Papert. Papert captivated a large part of the education community with his Logo language, which I even learned in my own elementary school education. Roy Pea followed up with empirical research, which showed little to no measurable learning gains to support Papert’s claims about its educational benefits. Fernando suspects that the problem here was that the curriculum was too open-ended. Pure discovery learning, with little or no direct instruction or guidance, tends to only work for students with a lot of confidence and background knowledge, and Papert was a consummate believer in discovery learning. Yet, the results tarnished not just Papert’s own teaching style, but also Logo as a language, and even computing in the elementary classroom in general.

Similarly, this study is about whether LSU’s computational thinking curriculum works. But the results are likely to be cited for broader questions. Does CodeWorld work in K-12? Does Haskell work in K-12? Does functional programming work in K-12?

And there are definitely challenges here: The teachers are not experts, the curriculum is new and in flux, and there’s not even a standard way to measure student achievement in computational thinking in the first place! Answers to these questions are now more urgent than they were before.

#3: LSU has empirical support for Haskell in computational thinking.

Fernando laid out how LSU is building a full set of four “STEM pathways” that high school students can follow to earn endorsements on their high school diplomas. The four pathways include pre-engineering, computation, digital design, and biomedical sciences. Each of these pathways will include some technology classes and often their own programming languages and tools — whether that’s JavaScript and Unity for digital design, or R for statistical analysis in biomedical sciences, or Arduino for building smart devices. The decision was made to add one class at the beginning that’s a common introduction to computational thinking before splitting into domain-specific directions. And that’s where Haskell and CodeWorld come in.

Fernando noted that they have seen the expected resistance to the choice of Haskell for the introductory language. Some of this resistance comes from familiarity, as stakeholders wonder what’s wrong with just using Python like “everyone else”. Some comes from more substantive objections, as well, and Fernando talked a bit about how CodeWorld’s functions are sometimes harder to read than object-oriented dot notation because of how they separate operations like translation and rotation from the data they use. Despite the objections, though, they’ve seen some surprising support:

  • Most teachers in their 6-week summer training program panic around week 2. By week 6, though, they’ve become huge fans of the tools, and advocate strongly for them.
  • Students, too, fall in love with it. Fernando shared a story of a teacher who first taught computational thinking with Haskell & CodeWorld, then later an AP Computer Science Principles class in the same school, using MIT’s Scratch. Students set up protests and chanted “CodeWorld! CodeWorld!” at various points in the class, begging for a return to the Haskell tool.

Another key part of the decision was a desire to integrate with mathematics education. Louisiana schools are struggling with math achievement, and LSU specifically designed the curriculum to include a lot of mathematical thinking as well. This was great to hear, since building math skills was my fundamental goal in the CodeWorld project in the first place. A big part of the EIR grant is to observe and quantify this effect. My choice of Haskell for CodeWorld, in the beginning, came from the observation that writing Haskell feels like writing mathematics in a way that’s true of no other language or tool that I know.

#4: Learning progressions and stories are important.

One of the best moments of Fernando’s talk, and one that drew spontaneous applause, was the progression of generating a star from overlapping triangles. The progression (which I’m modifying a bit) goes like this:

Punchline: Suddenly, you can change the number of points! https://code.world/#PyeIKl57dFGPYsub71Ly_Qg

The point here (pun intended) is that by communicating the structure behind numbers with an expression, you can capture repeated reasoning, generalize it, and then extend your generalizations. By removing that information from your notation (for example, by simplifying arithmetic), you lose the opportunity. This is similar to John Mason’s writing on tracking arithmetic, which I’ve blogged about here in the past.
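As a rough plain-Haskell sketch of that idea (not Fernando’s actual CodeWorld code): keep n visible in the expression for the rotation angles, and changing the number of points becomes a one-character edit.

```haskell
-- Angles (in degrees) at which to rotate one triangle to build an
-- n-pointed star. The structure "i * 360 / n" stays visible in the
-- expression instead of being simplified away into opaque constants.
starAngles :: Int -> [Double]
starAngles n = [ fromIntegral i * 360 / fromIntegral n | i <- [0 .. n - 1] ]

main :: IO ()
main = do
  print (starAngles 3)  -- [0.0,120.0,240.0]
  print (starAngles 5)  -- [0.0,72.0,144.0,216.0,288.0]
```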

Fernando also talks about how he uses randomness and interactivity to prompt students to generalize and parameterize their thinking. This is all good stuff.

#5: There’s still a long way to go.

One thing that became clear as Fernando finished is that he’s still struggling with how to convey some ideas. Interacting with the outside world is difficult from CodeWorld’s purely functional models. Students and teachers alike find CodeWorld’s stateful activities to be too difficult to use.

The good news here is that for an introductory course, these things aren’t as important. But they matter eventually. How do we help the teacher who wants to build complex demonstrations, or do statistics with outside data sets, or build stateful models of economics or physics systems? What does an “advanced” computational thinking course with Haskell or CodeWorld look like? These are open questions, and Fernando presented some compelling (and controversial) ideas.

Hope you have the chance to watch, and share your thoughts.

by Chris Smith at December 06, 2019 09:44 PM

Matt Parsons

Splitting Persistent Models

Reddit user /u/qseep made a comment on my last blog post, asking if I had any advice for splitting up persistent model definitions:

A schema made using persistent feels like a giant Types module. One change to an entity definition requires a recompile of the entire schema and everything that depends on it. Is there a similar process to break up a persistent schema into pieces?

Yes! There is. In fact, I’ve been working on this at work, and it’s made a big improvement in our overall compile-times. I’m going to lay out the strategy and code here to make it all work out.

You’d primarily want to do this to improve compilation times, though it’s also logically nice to “only import what you need” I guess.

Starting Point and Background

persistent is a database library in Haskell that focuses on rapid productivity and iteration with relatively simple types. It’s heavier than a raw SQL library, but much simpler than something like opaleye or beam. It also offers fewer features and less type-safety than those two libraries. Trade-offs abound!

Usually, persistent users will define the models/tables for their database using the persistent QuasiQuoter language. The examples in the Persistent chapter in the Yesod book use the QuasiQuoter directly in line with the Haskell code:

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Person
    name String
    age Int Maybe
    deriving Show
BlogPost
    title String
    authorId PersonId
    deriving Show
|]

The Yesod scaffold, however, loads a file:

-- You can define all of your database entities in the entities file.
-- You can find more information on persistent and how to declare entities
-- at:
-- http://www.yesodweb.com/book/persistent/
share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    $(persistFileWith lowerCaseSettings "config/models")

For smaller projects, I’d recommend using the QuasiQuoter - it causes fewer problems with GHCi (no need to worry about relative file paths). Once the models file gets big, compilation will become slow, and you’ll want to split it into many files. I investigated this slowness to see what the deal was, initially suspecting that the Template Haskell code was slowing things down. What I found was a little surprising: for a 1,200 line models file, we were spending less than a second doing TemplateHaskell. The rest of the module would take several minutes to compile, largely because the generated module was over 390,000 lines of code, and GHC is superlinear in compiling large modules.

Another reason to split it up is to avoid GHCi linker issues. GHCi can exhaust linker ticks (or some other weird finite resource?) when compiling a module, and it will do this when you get more than ~1,000 lines of models (in my experience).

Split Up Approaches

I am aware of two approaches for splitting up the modules - one uses the QuasiQuoter, and the other uses external files for compilation. We’ll start with external files, as that approach works best with persistent migrations and requires the least error-prone manual work.

Separate Files

I prepared a GitHub pull request that demonstrates the changes in this section. Follow along for exactly what I did:

In the Yesod scaffold, you have a config/models file which contains all of the entity definitions. We’re going to rename the file to config/models_backup, and we’re going to create a folder config/models/ where we will put the new entity files. For consistency/convention, we’re going to name the files ${EntityName}.persistentmodels, so we’ll end up with this directory structure:

config
└── models
    ├── Comment.persistentmodels
    ├── Email.persistentmodels
    └── User.persistentmodels

Now, we’re going to create a Haskell file for each models file.

{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE NoImplicitPrelude          #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

module Model.User where

import ClassyPrelude.Yesod
import Database.Persist.Quasi

share [mkPersist sqlSettings]
    $(persistFileWith lowerCaseSettings "config/models/User.persistentmodels")

So far, so good! The contents of the User.persistentmodels file only has the entity definition for the User table:

-- config/models/User.persistentmodels
User
    ident Text
    password Text Maybe
    UniqueUser ident
    deriving Typeable

Next up, we’ll do Email, which is defined like this:

Email
    email Text
    userId UserId Maybe
    verkey Text Maybe
    UniqueEmail email

Email refers to the UserId type, which is defined in Model.User. So we need to add that import to the Model.Email module.

{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE NoImplicitPrelude          #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

module Model.Email where

import ClassyPrelude.Yesod
import Database.Persist.Quasi

import Model.User

share [mkPersist sqlSettings]
    $(persistFileWith lowerCaseSettings "config/models/Email.persistentmodels")

We need to do the same thing for the Comment type and module.

Now, we have a bunch of modules that define our data entities. You may want to re-export them all from the top-level Model module, or you may choose to have finer-grained imports. Either way has advantages and disadvantages.
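If you go the re-export route, the top-level module is just a pass-through. A minimal sketch (module names follow the examples above):

```haskell
-- Re-export all of the split model modules from one place,
-- so existing `import Model` sites keep working.
module Model
  ( module Model.User
  , module Model.Email
  , module Model.Comment
  ) where

import Model.User
import Model.Email
import Model.Comment
```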

Migrations

Let’s get those persistent migrations back. If you’re not using persistent migrations, then you can just skip this bit. We’ll define a new module, Model.Migration, which will load up all the *.persistentmodels files and make a migration out of them.

{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE NoImplicitPrelude          #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

module Model.Migration where

import System.Directory
import ClassyPrelude.Yesod
import Database.Persist.Quasi

mkMigrate "migrateAll" $(do
    files <- liftIO $ do
        dirContents <- getDirectoryContents "config/models/"
        pure $ map ("config/models/" <>) $ filter (".persistentmodels" `isSuffixOf`) dirContents
    persistManyFileWith lowerCaseSettings files
    )

Some tricks here:

  1. You can write do notation in a TemplateHaskell splice, because Q is a monad, and a splice only expects the result to have type Q splice, where splice depends on where the splice appears syntactically. Here, we have Q Exp because it’s used in an expression context.
  2. We do a relatively simple scan - we get the directory contents for our models, filter to the suffix we care about, and then prepend the full directory path.
  3. Finally we call persistManyFileWith, which takes a list of files and parses them into a [EntityDef].
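The pure part of that scan (steps 2 and 3) is easy to check on its own. Here’s a sketch - modelFiles is just an illustrative helper, not part of persistent:

```haskell
import Data.List (isSuffixOf)

-- Keep only the *.persistentmodels entries and prefix the
-- directory, mirroring the filter/map in the splice above.
modelFiles :: FilePath -> [FilePath] -> [FilePath]
modelFiles dir contents =
  map (dir ++) (filter (".persistentmodels" `isSuffixOf`) contents)

main :: IO ()
main = print (modelFiles "config/models/" [".", "..", "User.persistentmodels", "README"])
-- prints ["config/models/User.persistentmodels"]
```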

Now we’ve got migrations going, and our files are split up. This speeds up compilation quite a bit.

QuasiQuotes

If you’re not using migrations, this approach has far less boilerplate and fewer extra files to mess about with. However, the migration story is a little more complicated.

Basically, you just put your QuasiQuote blocks in separate Haskell modules, and import the types you need for the references to work out. Easy-peasy!

Migrations

Okay, so this is where it gets kind of obnoxious.

In each quasiquote block, you have it generate migrations for your types. Then, you’ll make a Model.Migration file, which will need to import the migrations from each of your Model files and then run them in order. You need to manually topologically sort the migrations based on references, or it will try to create, e.g., foreign keys to tables you haven’t created yet, which borks the migrations.

At least, I think that’s what you’d need to do - that’s about where I got when exploring this method on the work codebase, and I decided against it because it seemed less flexible and useful than the above approach since we use the persistent migrations.
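A minimal sketch of that manual ordering - the table names and the hand-maintained dependency list here are hypothetical, and the sort assumes the references are acyclic:

```haskell
import Data.List (partition)

-- Hypothetical, hand-maintained list: each table paired with the
-- tables its foreign keys reference.
deps :: [(String, [String])]
deps =
  [ ("Comment", ["User"])
  , ("Email",   ["User"])
  , ("User",    [])
  ]

-- Naive topological sort: repeatedly emit tables whose remaining
-- references have all been emitted already.
topoSort :: [(String, [String])] -> [String]
topoSort [] = []
topoSort xs
  | null ready = error "dependency cycle among tables"
  | otherwise  = names ++ topoSort (map prune rest)
  where
    (ready, rest) = partition (null . snd) xs
    names         = map fst ready
    prune (t, ds) = (t, filter (`notElem` names) ds)

main :: IO ()
main = print (topoSort deps)
-- prints ["User","Comment","Email"]
```

In Model.Migration you would then run each module’s generated migration in that sorted order.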

December 06, 2019 12:00 AM

December 05, 2019

Michael Snoyman

Tokio 0.2 - Rust Crash Course lesson 9

In the previous lesson in the crash course, we covered the new async/.await syntax stabilized in Rust 1.39, and the Future trait which lives underneath it. This information supersedes the now-defunct lesson 7 from last year, which covered the older Future approach.

Now it’s time to update the second half of lesson 7, and teach the hot-off-the-presses Tokio 0.2 release. For those not familiar with it, let me quote the project’s overview:

Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language.

If you want to write an efficient, concurrent network service in Rust, you’ll want to use something like Tokio. That’s not to say that this is the only use case for Tokio; you can do lots of great things with an event-driven scheduler outside of network services. It’s also not to say that Tokio is the only solution; the async-std library provides similar functionality.

However, network services are likely the most common domain agitating for a non-blocking I/O system. And Tokio is the most popular and established of these systems today. So this combination is where we’re going to get started.

And as a side note, if you have some other topic you’d like me to cover around this, please let me know on Twitter.

Exercise solutions will be included at the end of the blog post. Yes, I keep changing the rules, sue me.

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Hello Tokio!

Let’s kick this off. Go ahead and create a new Rust project for experimenting:

$ cargo new --bin usetokio

If you want to make sure you’re using the same compiler version as me, set up your rust-toolchain correctly:

$ echo 1.39.0 > rust-toolchain

And then set up Tokio as a dependency. For simplicity, we’ll install all the bells and whistles. In your Cargo.toml:

[dependencies]
tokio = { version = "0.2", features = ["full"] }

PROTIP You can run cargo build now to kick off the download and build of crates while you keep reading…

And now we’re going to write an asynchronous hello world application. Type this into your src/main.rs:

use tokio::io;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut stdout = io::stdout();
    let mut hello: &[u8] = b"Hello, world!\n";
    io::copy(&mut hello, &mut stdout).await?;
    Ok(())
}

NOTE I specifically said “type this in” instead of “copy and paste.” For getting comfortable with this stuff, I recommend manually typing in the code.

A lot of this should look familiar from our previous lesson. To recap:

  • Since we’ll be awaiting something and generating a Future, our main function is async.
  • Since main is async, we need to use an executor to run it. That’s why we use the #[tokio::main] attribute.
  • Since performing I/O can fail, we return a Result.

The first really new thing since last lesson is this little bit of syntax:

.await?

I mentioned it last time, but now we’re seeing it in real life. This is just the combination of our two pieces of prior art: .await for chaining together Futures, and ? for error handling. The fact that these work together so nicely is really awesome. I’ll probably mention this a few more times, because I love it that much.

The next thing to note is that we use tokio::io::stdout() to get access to some value that lets us interact with standard output. If you’re familiar with it, this looks really similar to std::io::stdout(). That’s by design: a large part of the tokio API is simply async-ifying things from std.

And finally, we can look at the actual tokio::io::copy call. As you may have guessed, and as stated in the API docs:

This is an asynchronous version of std::io::copy.

However, instead of working with the Read and Write traits, this works with their async cousins: AsyncRead and AsyncWrite. A byte slice (&[u8]) is a valid AsyncRead, so we’re able to store our input there. And as you may have guessed, Stdout is an AsyncWrite.

EXERCISE 1 Modify this application so that instead of printing “Hello, world!”, it copies the entire contents of standard input to standard output.

NOTE You can simplify this code using stdout.write_all after useing tokio::io::AsyncWriteExt, but we’ll stick to tokio::io::copy, since we’ll be using it throughout. But if you’re curious:

use tokio::io::{self, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut stdout = io::stdout();
    stdout.write_all(b"Hello, world!\n").await?;
    Ok(())
}

Spawning processes

Tokio provides a tokio::process module which resembles the std::process module. We can use this to implement Hello World once again:

use tokio::process::Command;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    Command::new("echo").arg("Hello, world!").spawn()?.await?;
    Ok(())
}

Notice how the ? and .await bits can go in whatever order they are needed. You can read this line as:

  • Create a new Command to run echo
  • Give it the argument "Hello, world!"
  • Spawn this, which may fail
  • Using the first ?: if it fails, return the error. Otherwise, return a Future
  • Using the .await: wait until that Future completes, and capture its Result
  • Using the second ?: if that Result is Err, return that error.

Pretty nice for a single line!

One of the great advantages of async/.await versus the previous way of doing async with callbacks is how easily it works with looping.

EXERCISE 2 Extend this example so that it prints Hello, world! 10 times.

Take a break

So far we’ve only really done a single bit of .awaiting. But it’s easy enough to .await on multiple things. Let’s use delay_for to pause for a bit.

use tokio::time;
use tokio::process::Command;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    Command::new("date").spawn()?.await?;
    time::delay_for(Duration::from_secs(1)).await;
    Command::new("date").spawn()?.await?;
    time::delay_for(Duration::from_secs(1)).await;
    Command::new("date").spawn()?.await?;
    Ok(())
}

We can also use the tokio::time::interval function to create a stream of “ticks” for each time a certain amount of time has passed. For example, this program will keep calling date once per second until it is killed:

use tokio::time;
use tokio::process::Command;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut interval = time::interval(Duration::from_secs(1));
    loop {
        interval.tick().await;
        Command::new("date").spawn()?.await?;
    }
}

EXERCISE 3 Why isn’t there an Ok(()) after the loop?

Time to spawn

This is all well and good, but we’re not really taking advantage of asynchronous programming at all. Let’s fix that! We’ve seen two different interesting programs:

  1. Infinitely pausing 1 second and calling date
  2. Copying all input from stdin to stdout

It’s time to introduce spawn so that we can combine these two into one program. First, let’s demonstrate a trivial usage of spawn:

use std::time::Duration;
use tokio::process::Command;
use tokio::task;
use tokio::time;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    task::spawn(dating()).await??;
    Ok(())
}

async fn dating() -> Result<(), std::io::Error> {
    let mut interval = time::interval(Duration::from_secs(1));
    loop {
        interval.tick().await;
        Command::new("date").spawn()?.await?;
    }
}

You may be wondering: what’s up with that ?? operator? Is that some special super-error handler? No, it’s just the normal error handling ? applied twice. Let’s look at some type signatures to help us out here:

pub fn spawn<T>(task: T) -> JoinHandle<T::Output>;

impl<T> Future for JoinHandle<T> {
    type Output = Result<T, JoinError>;
}

Calling spawn gives us back a JoinHandle<T::Output>. In our case, the Future we provide as input is dating(), which has an output of type Result<(), std::io::Error>. So that means the type of task::spawn(dating()) is JoinHandle<Result<(), std::io::Error>>.

We also see that JoinHandle implements Future. So when we apply .await to this value, we end up with whatever that type Output = Result<T, JoinError> thing is. Since we know that T is Result<(), std::io::Error>, this means we end up with Result<Result<(), std::io::Error>, JoinError>.

The first ? deals with the outer Result, exiting with the JoinError on an Err, and giving us a Result<(), std::io::Error> value on Ok. The second ? deals with the std::io::Error, giving us a () on Ok. Whew!

EXERCISE 4 Now that we’ve seen spawn, you should modify the program so that it calls both date in a loop, and copies stdin to stdout.

Synchronous code

You may not have the luxury of interacting exclusively with async-friendly code. Maybe you have some really nice library you want to leverage, but it performs blocking calls internally. Fortunately, Tokio’s got you covered with the spawn_blocking function. Since the docs are so perfect, let me quote them:

The task::spawn_blocking function is similar to the task::spawn function discussed in the previous section, but rather than spawning a non-blocking future on the Tokio runtime, it instead spawns a blocking function on a dedicated thread pool for blocking tasks.

EXERCISE 5 Rewrite the dating() function to use spawn_blocking and std::thread::sleep so that it calls date approximately once per second.

Let’s network!

I could keep stepping through the other cool functions in the Tokio library. I encourage you to poke around at them yourself. But I promised some networking, and by golly, I’m gonna deliver!

I’m going to slightly extend the example from the TcpListener docs to (1) make it compile and (2) implement an echo server. This program has a pretty major flaw in it, though; I recommend trying to find it.

use tokio::io;
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> io::Result<()> {
    let mut listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        let (socket, _) = listener.accept().await?;
        echo(socket).await?;
    }
}

async fn echo(socket: TcpStream) -> io::Result<()> {
    let (mut recv, mut send) = io::split(socket);
    io::copy(&mut recv, &mut send).await?;
    Ok(())
}

We use TcpListener to bind a socket. The binding itself is asynchronous, so we use .await to wait for the listening socket to be available. And we use ? to deal with any errors while binding the listening socket.

Next, we loop forever. Inside the loop, we accept new connections, using .await? like before. We capture the socket (ignoring the address as the second part of the tuple). Then we call our echo function and .await it.

Within echo, we use tokio::io::split to split up our TcpStream into its constituent read and write halves, and then pass those into tokio::io::copy, as we’ve done before.

Awesome! Where’s the bug? Let me ask you a question: what should the behavior be if a second connection comes in while the first connection is still active? Ideally, it would be handled. However, our program has just one task. And that task .awaits on each call to echo. So our second connection won’t be serviced until the first one closes.

EXERCISE 6 Modify the program above so that it handles concurrent connections correctly.

TCP client and ownership

Let’s write a poor man’s HTTP client. It will establish a connection to a hard-coded server, copy all of stdin to the server, and then copy all data from the server to stdout. To use this, you’ll manually type in the HTTP request and then hit Ctrl-D for end-of-file.

use tokio::io;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:8080").await?;
    let (mut recv, mut send) = io::split(stream);
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();

    io::copy(&mut stdin, &mut send).await?;
    io::copy(&mut recv, &mut stdout).await?;

    Ok(())
}

That’s all well and good, but it’s limited. It only handles half-duplex protocols like HTTP, and doesn’t actually support keep-alive in any way. We’d like to use spawn to run the two copys in different tasks. Seems easy enough:

let send = spawn(io::copy(&mut stdin, &mut send));
let recv = spawn(io::copy(&mut recv, &mut stdout));

send.await??;
recv.await??;

Unfortunately, this doesn’t compile. We get four nearly-identical error messages. Let’s look at the first:

error[E0597]: `stdin` does not live long enough
  --> src/main.rs:12:31
   |
12 |     let send = spawn(io::copy(&mut stdin, &mut send));
   |                      ---------^^^^^^^^^^------------
   |                      |        |
   |                      |        borrowed value does not live long enough
   |                      argument requires that `stdin` is borrowed for `'static`
...
19 | }
   | - `stdin` dropped here while still borrowed

Here’s the issue: our copy Future does not own the stdin value (or the send value, for that matter). Instead, it has a (mutable) reference to it. That value remains in the main function’s Future. Ignoring error cases, we know that the main function will wait for send to complete (thanks to send.await), and therefore the lifetimes appear to be correct. However, Rust doesn’t recognize this lifetime information. (Also, and I haven’t thought this through completely, I’m fairly certain that send may be dropped earlier than the Future using it in the case of panics.)

In order to fix this, we need to convince the compiler to make a Future that owns stdin. And the easiest way to do that here is to use an async move block.

EXERCISE 7 Make the code above compile using two async move blocks.

Playing with lines

This section will have a series of modifications to a program. I recommend you solve each challenge before looking at the solution. However, unlike the other exercises, I’m going to show the solutions inline since they build on each other.

Let’s build an async program that counts the number of lines on standard input. You’ll want to use the lines method for this. Read the docs and try to figure out what uses and wrappers will be necessary to make the types line up.

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let stdin = io::stdin();
    let stdin = io::BufReader::new(stdin);
    let mut count = 0u32;
    let mut lines = stdin.lines();
    while let Some(_) = lines.next_line().await? {
        count += 1;
    }
    println!("Lines on stdin: {}", count);
    Ok(())
}

OK, bumping this up one more level. Instead of standard input, let’s take a list of file names as command line arguments, and count up the total number of lines in all the files. Initially, it’s OK to read the files one at a time. In other words: don’t bother calling spawn. Give it a shot, and then come back here:

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut args = std::env::args();
    let _me = args.next(); // ignore command name
    let mut count = 0u32;

    for filename in args {
        let file = tokio::fs::File::open(filename).await?;
        let file = io::BufReader::new(file);
        let mut lines = file.lines();
        while let Some(_) = lines.next_line().await? {
            count += 1;
        }
    }

    println!("Total lines: {}", count);
    Ok(())
}

But now it’s time to make this properly asynchronous, and process the files in separate spawned tasks. In order to make this work, we need to spawn all of the tasks, and then .await each of them. I used a Vec of Future<Output=Result<u32, std::io::Error>>s for this. Give it a shot!

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut args = std::env::args();
    let _me = args.next(); // ignore command name
    let mut tasks = vec![];

    for filename in args {
        tasks.push(tokio::spawn(async {
            let file = tokio::fs::File::open(filename).await?;
            let file = io::BufReader::new(file);
            let mut lines = file.lines();
            let mut count = 0u32;
            while let Some(_) = lines.next_line().await? {
                count += 1;
            }
            Ok(count) as Result<u32, std::io::Error>
        }));
    }

    let mut count = 0;
    for task in tasks {
        count += task.await??;
    }

    println!("Total lines: {}", count);
    Ok(())
}

And finally in this progression: let’s change how we handle the count. Instead of .awaiting the count in the second for loop, let’s have each individual task update a shared mutable variable. You should use an Arc<Mutex<u32>> for that. You’ll still need to keep a Vec of the tasks though to ensure you wait for all files to be read.

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;
use std::sync::Arc;

// avoid thread blocking by using Tokio's mutex
use tokio::sync::Mutex;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut args = std::env::args();
    let _me = args.next(); // ignore command name
    let mut tasks = vec![];
    let count = Arc::new(Mutex::new(0u32));

    for filename in args {
        let count = count.clone();
        tasks.push(tokio::spawn(async move {
            let file = tokio::fs::File::open(filename).await?;
            let file = io::BufReader::new(file);
            let mut lines = file.lines();
            let mut local_count = 0u32;
            while let Some(_) = lines.next_line().await? {
                local_count += 1;
            }

            let mut count = count.lock().await;
            *count += local_count;
            Ok(()) as Result<(), std::io::Error>
        }));
    }

    for task in tasks {
        task.await??;
    }

    let count = count.lock().await;
    println!("Total lines: {}", *count);
    Ok(())
}

LocalSet and !Send

Thanks to @xudehseng for the inspiration on this section.

OK, did that last exercise seem a bit contrived? It was! In my opinion, the previous approach of .awaiting the counts and summing in the main function itself was superior. However, I wanted to teach you something else.

What happens if you replace the Arc<Mutex<u32>> with a Rc<RefCell<u32>>? With this code:

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;
use std::rc::Rc;
use std::cell::RefCell;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut args = std::env::args();
    let _me = args.next(); // ignore command name
    let mut tasks = vec![];
    let count = Rc::new(RefCell::new(0u32));

    for filename in args {
        let count = count.clone();
        tasks.push(tokio::spawn(async {
            let file = tokio::fs::File::open(filename).await?;
            let file = io::BufReader::new(file);
            let mut lines = file.lines();
            let mut local_count = 0u32;
            while let Some(_) = lines.next_line().await? {
                local_count += 1;
            }

            *count.borrow_mut() += local_count;
            Ok(()) as Result<(), std::io::Error>
        }));
    }

    for task in tasks {
        task.await??;
    }

    println!("Total lines: {}", count.borrow());
    Ok(())
}

You get an error:

error[E0277]: `std::rc::Rc<std::cell::RefCell<u32>>` cannot be shared between threads safely
  --> src/main.rs:15:20
   |
15 |         tasks.push(tokio::spawn(async {
   |                    ^^^^^^^^^^^^ `std::rc::Rc<std::cell::RefCell<u32>>` cannot be shared between threads safely
   |
  ::: /Users/michael/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.2/src/task/spawn.rs:49:17
   |
49 |     T: Future + Send + 'static,
   |                 ---- required by this bound in `tokio::task::spawn::spawn`

Tasks can be scheduled to multiple different threads. Therefore, your Future must be Send. And Rc<RefCell<u32>> is definitely !Send. However, in our use case, using multiple OS threads is unlikely to speed up our program; we’re going to be doing lots of blocking I/O. It would be nice if we could insist on spawning all our tasks on the same OS thread and avoid the need for Send. And sure enough, Tokio provides such a function: tokio::task::spawn_local. Using it (and adding back in async move instead of async), our program compiles, but breaks at runtime:

thread 'main' panicked at '`spawn_local` called from outside of a local::LocalSet!', src/libcore/option.rs:1190:5

Uh-oh! Now I’m personally not a big fan of this detect-it-at-runtime stuff, but the concept is simple enough: if you want to spawn onto the current thread, you need to set up your runtime to support that. And the way we do that is with LocalSet. In order to use this, you’ll need to ditch the #[tokio::main] attribute.

EXERCISE 8 Follow the documentation for LocalSet to make the program above work with Rc<RefCell<u32>>.

Conclusion

That lesson felt short, especially compared to the previous Tokio lesson, which seemed to go on forever. I think this is a testament to how easy the new async/.await syntax is to use.

There’s obviously a lot more that can be covered in asynchronous programming, but hopefully this establishes the foundations you need to work with the async/.await syntax and the Tokio library itself.

If we have future lessons, I believe they’ll cover additional libraries like Hyper as they move over to Tokio 0.2, as well as specific use cases people raise. If you want something covered, mention it to me on Twitter or in the comments below.

Solutions

Solution 1

use tokio::io;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();
    io::copy(&mut stdin, &mut stdout).await?;
    Ok(())
}

Solution 2

use tokio::process::Command;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    for _ in 1..=10 {
        Command::new("echo").arg("Hello, world!").spawn()?.await?;
    }
    Ok(())
}

Solution 3

Since the loop will either run forever or be short circuited by an error, any code following loop will never actually be called. Therefore, code placed there will generate a warning.

Solution 4

use std::time::Duration;
use tokio::process::Command;
use tokio::{io, task, time};

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let dating = task::spawn(dating());
    let copying = task::spawn(copying());

    dating.await??;
    copying.await??;

    Ok(())
}

async fn dating() -> Result<(), std::io::Error> {
    let mut interval = time::interval(Duration::from_secs(1));
    loop {
        interval.tick().await;
        Command::new("date").spawn()?.await?;
    }
}

async fn copying() -> Result<(), std::io::Error> {
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();
    io::copy(&mut stdin, &mut stdout).await?;
    Ok(())
}

Solution 5

async fn dating() -> Result<(), std::io::Error> {
    loop {
        task::spawn_blocking(|| { std::thread::sleep(Duration::from_secs(1)) }).await?;
        Command::new("date").spawn()?.await?;
    }
}

Solution 6

The simplest tweak is to wrap the echo call with tokio::spawn:

loop {
    let (socket, _) = listener.accept().await?;
    tokio::spawn(echo(socket));
}

There is a downside to this worth noting, however: we’re ignoring the errors produced by the spawned tasks. Likely the best behavior in this case is to handle the errors inside the spawned task:

#[tokio::main]
async fn main() -> io::Result<()> {
    let mut listener = TcpListener::bind("127.0.0.1:8080").await?;

    let mut counter = 1u32;
    loop {
        let (socket, _) = listener.accept().await?;
        println!("Accepted connection #{}", counter);
        tokio::spawn(async move {
            match echo(socket).await {
                Ok(()) => println!("Connection #{} completed successfully", counter),
                Err(e) => println!("Connection #{} errored: {:?}", counter, e),
            }
        });
        counter += 1;
    }
}

Solution 7

use tokio::io;
use tokio::spawn;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:8080").await?;
    let (mut recv, mut send) = io::split(stream);
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();

    let send = spawn(async move {
        io::copy(&mut stdin, &mut send).await
    });
    let recv = spawn(async move {
        io::copy(&mut recv, &mut stdout).await
    });

    send.await??;
    recv.await??;

    Ok(())
}

Solution 8

use tokio::prelude::*;
use tokio::io::AsyncBufReadExt;
use std::rc::Rc;
use std::cell::RefCell;

fn main() -> Result<(), std::io::Error> {
    let mut rt = tokio::runtime::Runtime::new()?;
    let local = tokio::task::LocalSet::new();
    local.block_on(&mut rt, main_inner())
}

async fn main_inner() -> Result<(), std::io::Error> {
    let mut args = std::env::args();
    let _me = args.next(); // ignore command name
    let mut tasks = vec![];
    let count = Rc::new(RefCell::new(0u32));

    for filename in args {
        let count = count.clone();
        tasks.push(tokio::task::spawn_local(async move {
            let file = tokio::fs::File::open(filename).await?;
            let file = io::BufReader::new(file);
            let mut lines = file.lines();
            let mut local_count = 0u32;
            while let Some(_) = lines.next_line().await? {
                local_count += 1;
            }

            *count.borrow_mut() += local_count;
            Ok(()) as Result<(), std::io::Error>
        }));
    }

    for task in tasks {
        task.await??;
    }

    println!("Total lines: {}", count.borrow());
    Ok(())
}

December 05, 2019 12:57 PM

Shin-Cheng Mu

How to Compute Fibonacci Numbers?

In early 2019, due to a silly flamewar, some friends in Taiwan and I took an interest in the computation of Fibonacci numbers. This post involves some inductive proofs and some light program derivation. If you think the fastest way to compute Fibonacci numbers is by a closed-form formula, you should read on!

Source: sciencefreak @ pixabay.

Let Nat be the type of natural numbers. We shall all be familiar with the following definition of Fibonacci number:

   fib :: Nat -> Nat
   fib 0     = 0
   fib 1     = 1
   fib (n+2) = fib (n+1) + fib n

(When defining functions on natural numbers I prefer to see 0 and (+1) (and thus (+2) = (+1) . (+1)) as constructors that can appear on the LHS, while avoiding subtraction on the RHS. It makes some proofs more natural, and it is not hard to recover the Haskell definition anyway.)

Executing the definition without other support (such as memoization) gives you a very slow algorithm, due to lots of re-computation. I had some programming textbooks in the 80’s that wrongly used this as evidence that “recursion is slow” (fib is usually one of the only two examples in the sole chapter on recursion in such books, the other being tree traversal).

By defining fib2 n = (fib n, fib (n+1)), one can easily derive an inductive definition of fib2,

   fib2 :: Nat -> (Nat, Nat)
   fib2 0     = (0, 1)
   fib2 (n+1) = (y, x+y)
      where (x,y) = fib2 n

which computes fib n (and fib (n+1)) in O(n) recursive calls. Be warned, however, that it does not imply that fib2 n runs in O(n) time, as we shall see soon.
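The n+k patterns above are not accepted by modern GHC; as the aside notes, it is not hard to recover a valid Haskell definition. Here is one direct transcription (my own sketch, using Integer in place of Nat):

```haskell
-- Direct Haskell transcriptions of fib and fib2, with the n+k
-- patterns replaced by explicit subtraction on the right-hand side.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- fib2 n = (fib n, fib (n+1)), computed in O(n) recursive calls.
fib2 :: Integer -> (Integer, Integer)
fib2 0 = (0, 1)
fib2 n = let (x, y) = fib2 (n - 1) in (y, x + y)
```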

Binet’s Formula

To be even faster, some might recall, do we not have a closed-form formula for Fibonacci numbers?

   fib n = (((1+√5)/2)^n - ((1-√5)/2)^n) / √5

It was believed that the formula was discovered by Jacques P. M. Binet in 1843, thus we call it Binet’s formula by convention, although it can be traced back earlier. Proving (or even discovering) the formula is a very good exercise in inductive proofs; on that I recommend this tutorial by Joe Halpern (CS 280 @ Cornell, 2005). Having a closed-form formula gives one the impression that it yields a quick algorithm. Some even claim that it delivers an O(1) algorithm for computing Fibonacci numbers. One shall not assume, however, that ((1+√5)/2)^n and ((1-√5)/2)^n can always be computed in a snap!

When processing large numbers, we cannot assume that arithmetic operations such as addition and multiplication take constant time. In fact, it is fascinating to know that multiplying large numbers, something that appears to be the most fundamental of operations, is a research topic that could still see a breakthrough in 2019 [HvdH19].
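To make the concern concrete, here is a small sketch (my own illustration, not from the original post) of Binet's formula evaluated naively with Double arithmetic. Once fib n outgrows the 53-bit mantissa of a Double, rounding the formula no longer recovers the exact value:

```haskell
-- Binet's formula evaluated with Double-precision floating point.
binet :: Int -> Integer
binet n = round ((phi ^^ n - psi ^^ n) / sqrt 5 :: Double)
  where
    phi = (1 + sqrt 5) / 2
    psi = (1 - sqrt 5) / 2

-- The exact Fibonacci sequence, for comparison.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

Comparing binet n against the exact sequence shows agreement for small n but disagreement well before n = 100, where fib n has far more significant bits than a Double can hold.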

Vorobev’s Equation

There is another family of algorithms that manages to compute fib n in O(log n) recursive calls. To construct such algorithms, one might start by asking oneself: can we express fib (n+k) in terms of fib n and fib k (and some other nearby values of fib if necessary)? Given such a formula, we can perhaps compute fib (n+n) from fib n, and design an algorithm that uses only O(log n) recursive calls.

Indeed, for n >= 1, we have

   fib (n+k) = fib (n-1) * fib k + fib n * fib (k+1)      -- (Vor)

This property can be traced back to Nikolai N. Vorobev, and we therefore refer to it as Vorobev’s Equation. A proof will be given later. For now, let us see how it helps us.

With Vorobev’s equation we can derive a number of (similar) algorithms that compute fib n in O(log n) recursive calls. For example, letting n, k in (Vor) be n+1, n, we get

   fib (2n+1) = (fib (n+1))^2 + (fib n)^2                   -- (1)

Let n, k be n+1, n+1, we get

   fib (2n+2) = 2 * fib n * fib (n+1) + (fib (n+1))^2       -- (2)

Subtract (1) from (2), we get

   fib 2n = 2 * fib n * fib (n+1) - (fib n)^2               -- (3)


The LHSs of (1) and (3) have respectively odd and even indices, while their RHSs involve only fib n and fib (n+1). Defining fib2v n = (fib n, fib (n+1)), we can derive the program below, which uses only O(log n) recursive calls.

  fib2v :: Nat -> (Nat, Nat)
  fib2v 0 = (0, 1)
  fib2v n | n `mod` 2 == 0 = (c, d)
          | otherwise      = (d, c + d)
     where (a, b) = fib2v (div n 2)
           c      = 2 * a * b - a * a
           d      = a * a + b * b

Which Runs Faster?

Having so many algorithms, the ultimate question is: which runs faster?

Interestingly, in 1988, James L. Holloway devoted an entire Master’s thesis to analysis and benchmarking of algorithms computing Fibonacci numbers. The thesis reviewed algorithms including (counterparts of) all those mentioned in this post so far, and some more algorithms based on matrix multiplication. I will summarise some of his results below.

For a theoretical analysis, we need to know the number of bits needed to represent fib n. Holloway estimated that representing fib n requires approximately n * 0.69424 bits; this is because fib n grows like φ^n/√5, where φ = (1+√5)/2 and log2 φ ≈ 0.69424. We will denote this number by N n. That N n is linear in n is consistent with our impression that fib n grows exponentially in n.

Algorithm fib2 makes O(n) recursive calls, but that does not mean its running time is O(n). Instead, fib2 n needs around N (n^2/2 - n/2) bit operations to compute. (Note that we are not talking about big-O here, but an approximate upper bound.)

What about Binet’s formula? We can compute √5 by Newton’s method. One can assume that each n-bit division needs n^2 operations. In each round, however, we need only the most significant N n + log n bits. Overall, the number of bit operations needed to evaluate Binet’s formula is dominated by log n * (N n + log n)^2, which is no faster than fib2.

Holloway studied several matrix-based algorithms. Generally, they need around (N n)^2 bit operations, multiplied by different constants.
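The post does not show the matrix-based algorithms themselves. As a rough sketch (my own transcription, not Holloway's code), they rest on the identity [[1,1],[1,0]]^n = [[fib (n+1), fib n], [fib n, fib (n-1)]], with the power computed by repeated squaring, again in O(log n) multiplications:

```haskell
-- A 2x2 matrix (a b / c d) represented as a flat 4-tuple.
type M = (Integer, Integer, Integer, Integer)

mmul :: M -> M -> M
mmul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Matrix power by repeated squaring.
mpow :: M -> Int -> M
mpow _ 0 = (1, 0, 0, 1)  -- identity matrix
mpow m n
  | even n    = let h = mpow m (n `div` 2) in mmul h h
  | otherwise = mmul m (mpow m (n - 1))

-- fib n is the top-right entry of [[1,1],[1,0]]^n.
fibMat :: Int -> Integer
fibMat n = let (_, f, _, _) = mpow (1, 1, 1, 0) n in f
```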

Meanwhile, algorithms based on Vorobev’s Equation perform quite well: it takes about 1/2 * (N n)^2 bit operations to compute fib2v n!

What about benchmarking? Holloway ran each algorithm for up to five minutes. In one of the experiments, the program based on Binet’s formula exceeded 5 minutes when log n = 7, while the program based on fib2 terminated within 5 minutes up to log n = 15. In another experiment (using simpler programs considering only cases where n is a power of 2), the program based on Binet’s formula exceeded 5 minutes when log n = 13. Meanwhile the matrix-based algorithms terminated within 3 to 5 seconds, and the program based on Vorobev’s Equation terminated in around 2 seconds.

Proving Vorobev’s Equation

Finally, let us see how Vorobev’s Equation can be proved. We perform induction on n. The cases n := 1 and n := 2 are easily established. Assuming that the equation holds for n (that is, (Vor)) and for n := n+1 (abbreviating fib to f):

   f (n+1+k) = f n * f k + f (n+1) * f (k+1)      -- (Vor')

we prove the case for n := n+2:

    f (n+2+k)
  = { definition of f }
    f (n+k) + f (n+k+1)
  = { (Vor) & (Vor') }
    f (n-1) * f k + f n * f (k+1) +
      f n * f k + f (n+1) * f (k+1)
  = { f (n+1) = f n + f (n-1) }
    f (n+1) * f k + f n * f (k+1) + f (n+1) * f (k+1)
  = { f (n+2) = f (n+1) + f n }
    f (n+1) * f k + f (n+2) * f (k+1) .

Thus completes the proof.

Related Work

Dijkstra derived another algorithm that computes fib n in O(log n) recursive calls in EWD654 [Dij78].

Besides his master’s thesis, Holloway and his supervisor Paul Cull also published a journal version of their results [CH89]. I do not know the whereabouts of Holloway — it seems that he didn’t pursue a career in academics. I wish him all the best. It comforts me imagining that any thesis that is written with enthusiasm and love, whatever the topic, will eventually be found by some readers who are also enthusiastic about it, somewhere, sometime.

I found a lot of interesting information on this page hosted by Ron Knott of the University of Surrey, and would recommend it too.

After the flamewar, Yoda Lee (李祐棠) conducted many experiments computing Fibonacci numbers, taking into consideration things like the precision of floating point computation and the choice of suitable floating point libraries. It is worth a read too. (In Chinese.)

So, what was the flamewar about? It started with someone suggesting that we should store on the moon (yes, the moon. Don’t ask me why) some important constants such as π and e and, with the constants being available in very large precision, many problems can be solved in constant time. Then people started arguing what it means computing something in constant time, whether Binet’s formula gives you a constant time algorithm… and here we are. Silly, but we learned something fun.

References

[CH89] Paul Cull, James L. Holloway. Computing Fibonacci numbers quickly. Information Processing Letters, 32(3), pp. 143-149, 1989.

[Dij78] Dijkstra. In honor of Fibonacci. EWD654, 1978.

[Hol88] James L. Holloway. Algorithms for Computing Fibonacci Numbers Quickly. Master’s Thesis, Oregon State University, 1988.

[HvdH19] David Harvey, Joris Van Der Hoeven. Integer multiplication in time O(n log n). 2019. hal-02070778.

The post How to Compute Fibonacci Numbers? appeared first on niche computing science.

by Shin at December 05, 2019 07:00 AM

December 04, 2019

Chris Penner

Advent of Optics: Day 4

Advent of Optics: Day 4

Since I'm releasing a book on practical lenses and optics later this month I thought it would be fun to do a few of this year's Advent of Code puzzles using as many obscure optics features as possible!

To be clear, the goal is to be obscure, strange and excessive towards the goal of using as many optics as possible in a given solution, even if it's awkward, silly, or just plain overkill. These are NOT idiomatic Haskell solutions, nor are they intended to be. Maybe we'll both learn something along the way. Let's have some fun!

You can find today's puzzle here.


Hey folks! Today's is a nice clean one! The goal is to find all the numbers within a given range which pass a series of predicates! The conditions each number has to match include:

  • Should be within the range; my range is 307237-769058
  • Should be six digits long; my range includes only 6 digit numbers, so we're all set here
  • Two adjacent digits in the number should be the same (e.g. '223456')
  • The digits should be in monotonically increasing order (i.e. each digit increases or stays the same from left to right)

And that's it!

In normal Haskell we'd make a list of all possibilities, then either chain a series of filter statements or use do-notation with guards to narrow it down. Luckily, folds have filters too!

First things first, since our checks have us analyzing the actual discrete digits we'll convert our Int to a String so we can talk about them as characters:

main = ([307237..769058] :: [Int])
        & toListOf (traversed . re _Show)
        & print

>>> main
["307237","307238","307239","307240","307241","307242","307243","307244","307245",
"307246", ...]

_Show is the same prism we've used for parsing in the previous examples, but re flips it around and generates a Getter which calls show! This is equivalent to the optic to show, but will get you an extra 20 optics points...

Now let's start adding filters! We'll start by checking that the digits are all ascending. I could write some convoluted fold which does this, but the quick and dirty way is simply to sort the digits lexicographically and see if the ordering changed at all:

main :: IO ()
main = ([307237..769058] :: [Int])
        & toListOf (traversed . re _Show
                    . filtered (\s -> s == sort s)
                   )
        & print

>>> main
["333333","333334","333335","333336","333337","333338",...]

filtered removes any focuses from the fold which don't match the predicate.

We can already see this filters out a ton of possibilities. Not done yet though; we need to ensure there's at least one double consecutive digit. I'll reach for my favourite hammer: lens-regex-pcre:

main :: IO ()
main = ([307237..769058] :: [Int])
        & toListOf (traversed . re _Show
                    . filtered (\s -> s == sort s)
                    . filteredBy (packed . [regex|(\d)\1+|])
                   )


>>> main 
["333333","333334","333335","333336","333337","333338",...]

Unfortunately we don't really see much difference in the first few options, but trust me, it did something. Let's see how it works:

I'm using filteredBy here instead of filtered, filteredBy is brand new in lens >= 4.18, so make sure you've got the latest version if you want to try this out. It's like filtered, but takes a Fold instead of a predicate. filteredBy will run the fold on the current element, and will filter out any focuses for which the fold yields no results.

The fold I'm passing in converts the String to a Text using packed, then runs a regex which matches any digit, then requires at least one more of that digit to be next in the string. Since regex only yields matches, if no matches are found the candidate will be filtered out.

That's all the criteria! Now we've got a list of all of them, but all we really need is the count of them, so we'll switch from toListOf to lengthOf:

main :: IO ()
main = ([307237..769058] :: [Int])
        & lengthOf ( traversed . re _Show
                   . filtered (\s -> s == sort s)
                   . filteredBy (packed . [regex|(\d)\1+|])
                   )
        & print

>>> main
889

That's the right answer, not bad!

Part 2

Part 2 only adds one more condition:

  • The number must contain a group of exactly 2 identical consecutive digits, e.g. 333 is no good, but 33322 is fine.

Currently we're just checking that it has at least two consecutive numbers, but we'll need to be smarter to check for groups of exactly 2. Luckily, it's not too tricky.

The regex traversal finds ALL non-overlapping matches within a given piece of text, and the + modifier is greedy, so we know that for a given string 33322 our current pattern will find the matches: ["333", "22"]. After that it's easy enough to just check that we have at least one match of length 2!
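Without regexes, the same exactly-2 check can be expressed with Data.List.group (my own sketch, not the post's optics solution): group splits the string into runs of equal characters, and we ask for at least one run of length exactly 2:

```haskell
import Data.List (group, sort)

-- A run of exactly two identical digits, e.g. the "22" in "33322".
hasExactDouble :: String -> Bool
hasExactDouble s = any ((== 2) . length) (group s)

part2 :: Int
part2 =
  length
    [ s
    | n <- [307237 .. 769058 :: Int]
    , let s = show n
    , s == sort s          -- digits never decrease
    , hasExactDouble s     -- some run of exactly two digits
    ]
```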

main :: IO ()
main = ([307237..769058] :: [Int])
        & lengthOf (traversed . re _Show
                   . filtered (\s -> s == sort s)
                   . filteredBy (packed . [regex|(\d)\1+|] . match . to T.length . only 2)
                   )
        & print

>>> main
589

I just get the match text, get its length, then use only 2 to filter down to matches of length exactly 2. filteredBy will detect whether any of the matches make it through the whole fold and kick out any numbers that don't have a group of exactly 2 consecutive digits.

That's it for today! Hopefully tomorrow's is just as optical! 🤞

Hopefully you learned something 🤞! Did you know I'm currently writing a book? It's all about Lenses and Optics! It takes you all the way from beginner to optics-wizard and it's currently in early access! Consider supporting it, and more posts like this one, by pledging on my Patreon page! It takes quite a bit of work to put these things together; if I managed to teach you something or even just entertain you for a minute or two, maybe send a few bucks my way for a coffee? Cheers!

Become a Patron!

December 04, 2019 12:00 AM

December 03, 2019

Chris Penner

Advent of Optics: Day 3

Advent of Optics: Day 3

Since I'm releasing a book on practical lenses and optics later this month I thought it would be fun to do a few of this year's Advent of Code puzzles using as many obscure optics features as possible!

To be clear, the goal is to be obscure, strange and excessive towards the goal of using as many optics as possible in a given solution, even if it's awkward, silly, or just plain overkill. These are NOT idiomatic Haskell solutions, nor are they intended to be. Maybe we'll both learn something along the way. Let's have some fun!

You can find today's puzzle here.


Today's didn't really have any phenomenal optics insights, but I did learn about some handy types and instances for handling points in space, so we'll run through it anyways and see if we can have some fun! You know the drill by now so I'll jump right in.

Sorry, this one's a bit rushed and messy, turns out writing a blog post every day is pretty time consuming.

We've got two sets of instructions, each representing paths of wires, and we need to find out where in the space they cross, then determine the distances of those points from the origin.

We'll start as always with parsing in the input! They made it a bit harder on us this time, but it's certainly nothing that lens-regex-pcre can't handle. Before we try parsing out the individual instructions we need to split our instruction sets into one for each wire! I'll just use the lines function to split the file in two:

import Control.Lens
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines

I'm using that handy <&> pipelining operator, which basically allows me to pass the contents of a monadic action through a bunch of operations. It just so happens that >>= has the right precedence to tack it on the end!
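(If `<&>` is new to you: it's simply flipped `fmap`, exported by `Data.Functor` and re-exported by `Control.Lens`. A tiny base-only sketch:)

```haskell
import Data.Functor ((<&>))

-- (<&>) :: Functor f => f a -> (a -> b) -> f b, i.e. flip fmap.
-- Each <&> feeds the functor's contents through one more pure step:
pipelined :: Maybe Int
pipelined = Just 3 <&> (+ 1) <&> (* 2)
-- pipelined == Just 8
```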

Now we've got a list of two Texts, with a path in each!

Keeping a list of the two elements is fine of course, but since this is a post about optics and obfuscation, we'll pack them into a tuple just for fun:

    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               >>= print

>>> main
("R999,U626,R854,D200,R696,...", "D424,L846,U429,L632,U122,...")
>>> 

This is a fun (and useless) trick! If you look closely, we're actually applying traverseOf to all of its arguments! What we're doing is applying view to each traversal (i.e. ix 0), which creates a function over our list of wire texts. traverseOf then sequences the function as the effect and returns a new function [Text] -> (Text, Text), which is pretty cool! When we pass in the list of wires this is applied and we get the tuple we want to pass forwards. We're using view on a traversal here, but it's all good because Text is a Monoid. This of course means that if the input doesn't have at least two lines we'll continue on silently without any errors... but there aren't a lot of adrenaline-pumping thrills in software development, so I guess I'll take them where I can get them. We'll just trust that the input is good. We could use singular or even preview to be safer if we wanted, but ain't nobody got time for that in a post about crazy hacks!
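The same trick can be seen in miniature on plain lists (a sketch assuming the lens package; note the silent Monoid fallback described above):

```haskell
import Control.Lens (both, ix, traverseOf, view)

-- Sequencing 'view' over a tuple of traversals, with the function
-- applicative ((->) [String]) as the effect, yields a function that
-- picks out the first two elements:
pickTwo :: [String] -> (String, String)
pickTwo = traverseOf both view (ix 0, ix 1)

-- pickTwo ["a","b","c"] == ("a","b")
-- pickTwo []            == ("","")  -- Monoid default: no error!
```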

Okay! Next step is to figure out the crazy path that these wires are taking. To do that we'll need to parse the paths into some sort of pseudo-useful form. I'm going to reach for lens-regex-pcre again, at least to find each instruction. We want to run this over both sides of our tuple though, so we'll add a quick incantation for that as well.

import Linear

main :: IO ()
main = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               <&> both %~
                     toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput)

parseInput :: (Char, String) -> (Int, V2 Int)
parseInput (d, n) = (,) (read n) $ case d of
    'U' -> V2 0 (-1)
    'D' -> V2 0 1
    'L' -> V2 (-1) 0
    'R' -> V2 1 0
    c   -> error ("parseInput: unexpected direction " ++ show c)

>>> main
([(999,V2 1 0),(626,V2 0 (-1)),...], [(854,V2 1 0),(200,V2 0 1),...])

Okay, there's a lot happening here, first I use the simple regex \w\d+ to find each "instruction", then grab the full match as Text.

Next in line I unpack it into a String since I'll need to use Read to parse the Ints.

After that I use the _Cons prism to split the string into its first char and the rest, which happens to get us the direction and the distance to travel respectively.

Then I run parseInput which converts the String into an Int with read, and converts the cardinal direction into a vector equivalent of that direction. This is going to come in handy soon I promise. I'm using V2 from the linear package for my vectors here.

Okay, so now we've parsed a list of instructions, but we need some way to determine where the wires intersect! The simplest possible way to do that is just to enumerate every single point that each wire passes through and see which ones they have in common; simple is good enough for me!

Okay here's the clever bit, the way we've organized our directions is going to come in handy, I'm going to create n copies of each vector in our stream so we effectively have a single instruction for each movement we'll make!

toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput . folding (uncurry replicate))

uncurry will make replicate into the function: replicate :: (Int, V2 Int) -> [V2 Int], and folding will run that function, then flatten out the list into the focus of the fold. Ultimately this gives us just a huge list of unit vectors like this:

[V2 0 1, V2 1 0, V2 (-1) 0...]
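Here's the expansion step in isolation (a sketch assuming the lens and linear packages):

```haskell
import Control.Lens (folded, folding, toListOf)
import Linear (V2 (..))

-- 'folding' runs a list-returning function and flattens the results
-- into the fold, so each (count, direction) pair becomes count copies
-- of the direction vector:
unitSteps :: [V2 Int]
unitSteps = toListOf (folded . folding (uncurry replicate))
                     [(2, V2 1 0), (1, V2 0 1)]
-- unitSteps == [V2 1 0, V2 1 0, V2 0 1]
```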

This is great, but we also need to keep track of which actual positions this will cause us to walk, we need to accumulate our position across the whole list. Let's use a scan:

import Control.Category ((>>>))

main :: IO ()
main = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               <&> both %~
                     (   toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput . folding (uncurry replicate))
                     >>> scanl1 (+)
                     >>> S.fromList
                     )
               >>= print

-- Trying to print this Set crashed my computer, 
-- but here's what it looked like on the way down:
>>> main
(S.fromList [V2 2003 1486,V2 2003 1487,...], S.fromList [V2 1961 86,V2 (-433) 8873,...])

Normally I really don't like >>>, but it allows us to keep writing code top-to-bottom here, so I'll allow it just this once.

The scan uses the Num instance of V2 which adds the x and y components separately. This causes us to move in the right direction after every step, and keeps track of where we've been along the way! I dump the data into a set with S.fromList because next we're going to intersect!
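The scan in isolation looks like this (assuming the linear package for V2):

```haskell
import Linear (V2 (..))

-- V2's Num instance adds componentwise, so a running sum of unit
-- vectors yields every intermediate position along the wire:
visited :: [V2 Int]
visited = scanl1 (+) [V2 1 0, V2 1 0, V2 0 1]
-- visited == [V2 1 0, V2 2 0, V2 2 1]
```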

main :: IO ()
main = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               <&> both %~
                     (   toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput . folding (uncurry replicate))
                     >>> scanl1 (+)
                     >>> S.fromList
                     )
               <&> foldl1Of each S.intersection
               >>= print

-- This prints a significantly shorter list and doesn't crash my computer
>>> main
fromList [V2 (-2794) (-390),V2 (-2794) 42,...]

Okay, we've jumped back out of our both block; now we need to intersect the sets in our tuple! A normal person would use uncurry S.intersection, but since this is an optics post we'll of course use the excessive version foldl1Of each S.intersection, which folds over each set using intersection! A bonus is that this version won't need to change if we eventually switch to many wires stored in a tuple or list, it'll just work™.
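In miniature, the same fold over a tuple of sets (assuming lens and containers):

```haskell
import Control.Lens (each, foldl1Of)
import qualified Data.Set as S

-- foldl1Of folds the sets focused by 'each' with S.intersection,
-- and keeps working unchanged if the tuple grows more components:
common :: S.Set Int
common = foldl1Of each S.intersection
                  (S.fromList [1, 2, 3], S.fromList [2, 3, 4])
-- common == S.fromList [2, 3]
```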

Almost done! Now we need to find which intersection is closest to the origin. In our case the origin is just (0, 0), so we can get the distance by simply summing the absolute values of the vector's components (the V2 here is acting as a point).

main :: IO ()
main = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               <&> both %~
                     (   toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput . folding (uncurry replicate))
                     >>> scanl1 (+)
                     >>> S.fromList
                     )
               <&> foldl1Of each S.intersection
               <&> minimumOf (folded . to (sum . abs))
               >>= print

>>> main
Just 399

And that's my answer! Wonderful!

Part 2

Part 2 is a pretty reasonable twist: now we need to pick the intersection which is the fewest steps along the wire from the origin. We sum together the steps along each wire and optimize for the smallest total.

Almost all of our code stays the same, but a Set isn't going to cut it anymore; we need to know which step we were on when we reached each location! Maps are kinda like sets with extra info, so we'll switch to that instead. Instead of using S.fromList we'll use toMapOf! We need the index of each element in the list (which corresponds to its distance from the origin along the wire). A simple zip [1..] would do it, but we'll use the much more obtuse version:

toMapOf (reindexed (+1) traversed . withIndex . swapped . ito id)

Fun right? traversed has a numerically increasing index by default, reindexed (+1) makes it start at 1 instead (since the first step still counts!). Make sure you don't forget this or you'll be confused for a few minutes before realizing your answer is off by 2...

toMapOf uses the index as the key, but in our case we actually need the vector as the key! Again, the easiest thing would be to just use a proper M.fromList, but we won't give up so easily. We need to swap our index and our value within our lens path! We can pull the index down from its hiding place into value-land using withIndex, which adds the index to your value as a tuple (in our case: (Int, V2 Int)); then we swap places using the swapped iso, and reflect the V2 Int into the index using ito:

ito :: (s -> (i, a)) -> IndexedGetter i s a

Now toMapOf properly builds a Map (V2 Int) Int!
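On a small input the whole index-shuffling pipeline looks like this (a sketch assuming lens and containers):

```haskell
import Control.Lens (ito, reindexed, swapped, toMapOf, traversed, withIndex)
import qualified Data.Map as M

-- Number the elements from 1, expose the index as a tuple, swap it
-- with the element, and reflect the element back up as the new index;
-- toMapOf then uses the element as key and the step count as value:
stepMap :: M.Map Char Int
stepMap = toMapOf (reindexed (+ 1) traversed . withIndex . swapped . ito id) "abc"
-- stepMap == M.fromList [('a', 1), ('b', 2), ('c', 3)]
```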

Let's finish off part 2:

main2 :: IO ()
main2 = do
    TIO.readFile "./src/Y2019/day03.txt"
               <&> T.lines
               <&> traverseOf both view (ix 0, ix 1)
               <&> each %~
                     (   toListOf ([regex|\w\d+|] . match . unpacked . _Cons . to parseInput . folding (uncurry replicate))
                     >>> scanl1 (+)
                     >>> toMapOf (reindexed (+1) traversed . withIndex . swapped . ito id)
                     )
               <&> foldl1Of each (M.intersectionWith (+))
               <&> minimum
               >>= print

We use M.intersectionWith (+) now so that the distances are added whenever we hit an intersection; the resulting Map holds the sum of the two wires' step counts at each intersection.

Now we just get the minimum distance and print it! All done!

This one wasn't so "opticsy", but hopefully tomorrow's puzzle will fit a bit better! Cheers!

December 03, 2019 12:00 AM

December 02, 2019

Shin-Cheng Mu

Deriving Monadic Programs

Around 2016-17, my colleagues in Academia Sinica invited me to join their project reasoning about Spark, a platform for distributed computation. The aim was to build a Haskell model of Spark and answer questions like “under what conditions is this Spark aggregation deterministic?” Being a distributed computation model, Spark is intrinsically non-deterministic. To properly model non-determinism, I thought, I had to use monads.

Monad Only. By Bradley Gordon. CC-by 2.0.

That was how I started to take an interest in the reasoning and derivation of monadic programs. Several years having passed, I have collaborated with many nice people, managed to get some results published, failed to publish some work I personally like, and am still working on some interesting tiny problems. This post summarizes what was done, and what remains to be done.

Non-determinism

Prior to that, all the program reasoning I had done was restricted to pure programs. They are beautiful mathematical expressions suitable for equational reasoning, while effectful programs are the awkward squad not worthy of rigorous treatment — so I thought, and I could not have been more wrong! It turns out that there is plenty of fun reasoning one can do with monadic programs. The rule of the game is that you do not know how the monad you are working with is implemented, thus you rely only on the monad laws:

    return x >>= f  =  f x
      m >>= return  =  m
   (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)

and the laws of the effect operators. For non-determinism monad we usually assume two operators: 0 for failure, and (|) for non-deterministic choice (usually denoted by mzero and mplus of the type class MonadPlus). It is usually assumed that (|) is associative with 0 as its identity element, and they interact with (>>=) by the following laws:

                  0 >>= f  =  0                         (left-zero)
          (m1 | m2) >>= f  =  (m1 >>= f) | (m2 >>= f)   (left-distr.)
                   m >> 0  =  0                         (right-zero)
m >>= (\x -> f1 x | f2 x)  =  (m >>= f1) | (m >>= f2)   (right-distr.)

The four laws are respectively named left-zero, left-distributivity, right-zero, and right-distributivity, about which we will discuss more later. These laws are sufficient for proving quite a lot of interesting properties about non-deterministic monad, as well as properties of Spark programs. I find it very fascinating.
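As a concrete model (up to order and repetition in the answers), the list monad satisfies these laws with 0 as [] and (|) as (++). A base-only sanity check of left-distributivity:

```haskell
-- In the list monad, failure is [] and choice is (++); left-
-- distributivity says that choosing first and then binding equals
-- binding each branch and then choosing:
leftDistr :: Bool
leftDistr = ((m1 ++ m2) >>= f) == ((m1 >>= f) ++ (m2 >>= f))
  where
    m1, m2 :: [Int]
    m1 = [1, 2]
    m2 = [3]
    f x = [x, x * 10]
-- leftDistr == True
```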

Unfortunately, it turned out that monads were too heavy a machinery for the target readers of the Spark paper. The version we eventually published in NETYS 2017 [CHLM17] consists of pure-looking functional programs that occasionally use “non-deterministic functions” in an informal, but probably more accessible way. Ondřej Lengál should be given credit for most, if not all, of the proofs. My proofs using the non-deterministic monad were instead collected in a tech. report [Mu19a]. (Why a tech. report? We will come to this later.)

State and Non-determinism

Certainly, it would be more fun if, besides non-determinism, more effects are involved. I have also been asking myself: rather than proving properties of given programs, can I derive monadic programs? For example, is it possible to start from a non-deterministic specification, and derive a program solving the problem using states?

The most obvious class of problems that involve both non-determinism and state is backtracking programs. Thus I tried to tackle a problem previously dealt with by Jeremy Gibbons and Ralf Hinze [GH11], the n-queens problem — placing n queens on an n by n chess board in such a way that no queen can attack another. The specification non-deterministically generates all chess arrangements, before filtering out the safe ones. We wish to derive a backtracking program that remembers the currently occupied diagonals in a state monad.

Jeremy Gibbons suggested to generalise the problem a bit: given a problem specification in terms of a non-deterministic scanl, is it possible to transform it to a non-deterministic and stateful foldr?

Assuming all the previous laws and, in addition, laws about get and put of state monad (the same as those assumed by Gibbons and Hinze [GH11], omitted here), I managed to come up with some general theorems for such transformations.

The interaction between non-determinism and state turned out to be intricate. Recall the right-zero and right-distributivity laws:

                   m >> 0  =  0                        (right-zero)
m >>= (\x -> f1 x | f2 x)  =  (m >>= f1) | (m >>= f2)  (right-distr.)

While they do not explicitly mention state at all, in the presence of state these two laws imply that each non-deterministic branch has its own copy of the state. In the right-zero law, if a computation fails, it just fails — all state modifications in m are forgotten. In right-distributivity, the two occurrences of m on the RHS each operate on their own local copy of the state, thus locally it appears that the side effects in m happen only once.

We call a non-deterministic state monad satisfying these laws a local state monad. A typical example is M a = S -> List (a, S) where S is the type of the state — modulo order and repetition in the list monad, that is. The same monad can be constructed by StateT s (ListT Identity) in the Monad Transformer Library. With effect handling [KI15], we get the desired monad if we run the handler for state before that for list.
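A hand-rolled sketch of that monad (the names here are illustrative, not from the papers):

```haskell
-- Local-state non-determinism, M a = S -> [(a, S)]: each branch
-- threads its own copy of the state, so failures and choices
-- cannot leak state changes between branches.
newtype Local s a = Local { runLocal :: s -> [(a, s)] }

instance Functor (Local s) where
  fmap f (Local g) = Local (\s -> [ (f a, s') | (a, s') <- g s ])

instance Applicative (Local s) where
  pure a = Local (\s -> [(a, s)])
  Local mf <*> Local ma =
    Local (\s -> [ (f a, s'') | (f, s') <- mf s, (a, s'') <- ma s' ])

instance Monad (Local s) where
  Local m >>= k = Local (\s -> concat [ runLocal (k a) s' | (a, s') <- m s ])

-- The 0 and (|) operators for this monad:
zeroL :: Local s a
zeroL = Local (const [])

orL :: Local s a -> Local s a -> Local s a
orL (Local m) (Local n) = Local (\s -> m s ++ n s)
```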

The local state monad is the ideal combination of non-determinism and state we would like to have. It has nice properties, and is much more manageable. However, there are practical reasons why one may want a state to be shared globally; for example, when the state is a large array that is costly to copy. Typically one uses operations to explicitly “roll back” the global state to its previous configuration at the end of each non-deterministic branch.

Can we reason about programs that use a global state?

Global State

The non-determinism monad with a global state turns out to be a weird beast to tame.

While we are concerned with what laws a monad satisfies, rather than how it is implemented, we digress a little and consider how to implement a global state monad, just to see the issues involved. By intuition one might guess M a = S -> (List a, S), but that is not even a monad — the direct but naive implementation of its (>>=) does not satisfy the monad laws! The type ListT (State s) generated using the Monad Transformer Library expands to essentially the same implementation, and is flawed in the same way (but the authors of MTL do not seem to have bothered fixing it). For correct implementations, see the discussions on the Haskell wiki. With effect handling [KI15], we do get a monad by running the handler for list before that for state.

Assume that we do have a correct implementation of a global state monad. What can we say about it? We no longer have the right-zero and right-distributivity laws, but left-zero and left-distributivity still hold. For now we assume an informal, intuitive understanding of the semantics: a global state is shared among non-deterministic branches, which are executed left-to-right. We will need more laws to, for example, formally specify what we mean by “the state is shared”. This will turn out to be tricky, so we postpone it for now.

In backtracking algorithms that keep a global state, it is a common pattern to

  1. update the current state to its next step,
  2. recursively search for solutions, and
  3. roll back the state to the previous step.

To implement such pattern as a monadic program, one might come up with something like the code below:

  modify next >> search >>= modReturn prev

where next advances the state, prev undoes the modification of next, and modify and modReturn are defined by:

modify f       = get >>= (put . f)
modReturn f v  = modify f >> return v

Let the initial state be st and assume that search found three choices m1 | m2 | m3. The intention is that m1, m2, and m3 all start running with state next st, and the state is restored to prev (next st) = st afterwards. By left-distributivity, however,

 modify next >> (m1 | m2 | m3) >>= modReturn prev =
   modify next >> (  (m1 >>= modReturn prev) |
                     (m2 >>= modReturn prev) |
                     (m3 >>= modReturn prev))

which, with a global state, means that m2 starts with state st, after which the state is rolled back too early to prev st. The computation m3 starts with prev st, after which the state is rolled too far to prev (prev st).

Nondeterministic Choice as Sequencing

We need a way to say that “modify next and modReturn prev are run exactly once, respectively before and after all non-deterministic branches in solve.” Fortunately, we have discovered a curious technique. Since non-deterministic branches are executed sequentially, the program

 (modify next >> 0) | m1 | m2 | m3 | (modify prev >> 0)

executes modify next and modify prev once, respectively before and after all the non-deterministic branches, even if they fail. Note that modify next >> 0 does not generate a result. Its presence is merely for the side-effect of modify next.

The reader might wonder: now that we are using (|) as a sequencing operator, does it simply coincide with (>>)? Recall that we still have left-distributivity and, therefore, (m1 | m2) >> n equals (m1 >> n) | (m2 >> n). That is, (|) acts as “insertion points”, where future code followed by (>>) can be inserted into! This is certainly a dangerous feature, whose undisciplined use can lead to chaos.

To be slightly disciplined, we can go a bit further by defining the following variations of put, which restores the original state when it is backtracked over:

putR s = get >>= (\s0 -> put s | (put s0 >> 0))

To see how it works, assume that some computation comp follows putR s. By left-distributivity we get:

   putR s >> comp
=  (get >>= (\s0 -> put s | (put s0 >> 0))) >> comp
=    { monad laws, left dist., left zero }
   get >>= (\s0 -> put s >> comp |
                   (put s0 >> 0))

Therefore, comp runs with new state s. After it finishes, the current state s0 is restored.

The hope is that, by replacing all put with putR, we can program as if we are working with local states, while there is actually a shared global state.

(I later learned that Tom Schrijvers had developed similar and more complete techniques, in the context of simulating Prolog boxes in Haskell.)

Handling Local State with Global State

So was the idea. I had to find out what laws are sufficient to formally specify the behaviour of a global state monad (note that the discussion above has been informal), and make sure that there exists a model/implementation satisfying these laws.

I prepared a draft paper containing proofs about Spark functions using non-determinism monad, a derivation of backtracking algorithms solving problems including n-Queens using a local state monad and, after proposing laws a global state monad should satisfy, derived another backtracking algorithm using a shared global state. I submitted the draft and also sent the draft to some friends for comments. Very soon, Tom Schrijvers wrote back and warned me: the laws I proposed for the global state monad could not be true!

I quickly withdrew the draft, and invited Tom Schrijvers to collaborate and fix the issues. Together with Koen Pauwels, they carefully figured out what the laws should be, showed that the laws are sufficient to guarantee that one can simulate local states using a global state (in the context of effect handling), that there exists a model/implementation of the monad, and verified key theorems in Coq. That resulted in a paper Handling local state with global state, which we published in MPC 2019.

The paper is about semantical concerns of the local/global state interaction. I am grateful to Koen and Tom, who deserve credits for most of the hard work — without their help the paper could not have been done. The backtracking algorithm, meanwhile, became a motivating example that was briefly mentioned.

Tech. Reports

I was still holding out hope that my derivations could be published in a conference or journal, until I noticed, by chance, a submission to MPC 2019 by Affeldt et al [ANS19]. They formalised a hierarchy of monadic effects in Coq and, for demonstration, needed examples of equational reasoning about monadic programs. They somehow found the draft that was previously withdrawn, and corrected some of its errors. I am still not sure how that happened — I might have put the draft on my web server to communicate with my students, and somehow it showed up on the search engine. The file name was test.pdf. And that was how the draft was cited!

“Oh my god,” I thought in horror, “please do not cite an unfinished work of mine, especially when it is called test.pdf!”

I quickly wrote to the authors, thanked them for noticing the draft and finding errors in it, and said that I would turn it into tech. reports, which they could cite more properly. That resulted in two tech. reports: Equational reasoning for non-determinism monad: the case of Spark aggregation [Mu19a] contains my proofs of Spark programs, and Calculating a backtracking algorithm: an exercise in monadic program derivation [Mu19b] the derivation of backtracking algorithms.

Pointwise Relational Program Calculation

There are plenty of potentially interesting topics one can explore in monadic program derivation. For one, people have been suggesting pointwise notations for relational program calculation (e.g. de Moor and Gibbons [dMG00], Bird and Rabe [BR19]). I believe that monads offer a good alternative. Plenty of relational program calculation can be carried out in terms of the non-determinism monad. Program refinement can be defined by

m1 ⊆ m2  ≡  m1 | m2 = m2

This definition applies to monads having other effects too. I have a draft demonstrating the idea with quicksort. Sorting is specified by a non-determinism monad returning a permutation of the input that is sorted — when the ordering is not anti-symmetric, there can be more than one way to sort a list, therefore the specification is non-deterministic. From the specification, one can derive pure quicksort on lists, as well as quicksort that mutates an array. Let us hope I have better luck publishing it this time.
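Reading the non-determinism monad as the list monad up to order and repetition, refinement is just inclusion of result sets; a small illustrative helper (not from the draft):

```haskell
import qualified Data.Set as S

-- m1 ⊆ m2  ≡  m1 | m2 = m2, read in the set model: m1 refines m2
-- exactly when adding m1's results to m2's changes nothing.
refines :: Ord a => [a] -> [a] -> Bool
refines m1 m2 = S.fromList (m1 ++ m2) == S.fromList m2

-- refines [1] [1, 2 :: Int] == True   (a more deterministic program)
-- refines [3] [1, 2 :: Int] == False
```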

With Kleisli composition, there is even a natural definition of factors. Lifting (⊆) to functions (that is f ⊆ g ≡ (∀ x : f x ⊆ g x)), and recall that (f >=> g) x = f x >>= g, the left factor (\) can be specified by the Galois connection:

(f >=> g) ⊆ h  ≡  g ⊆ (f \ h)

That is, f \ h is the most non-deterministic (least constrained) monadic program that, when run after the postcondition set up by f, still meets the result specified by h.

If, in addition, we have a proper notion of converses, I believe that plenty of optimisation problems can be specified and solved using calculation rules of factors and converses. I believe these are worth exploring.

References

[ANS19] Reynald Affeldt, David Nowak and Takafumi Saikawa. A hierarchy of monadic effects for program verification using equational reasoning. In Mathematics of Program Construction (MPC), Graham Hutton, editor, pp. 226-254. Springer, 2019.

[BR19] Richard Bird, Florian Rabe. How to calculate with nondeterministic functions. In Mathematics of Program Construction (MPC), Graham Hutton, editor, pp. 138-154. Springer, 2019.

[CHLM17] Yu-Fang Chen, Chih-Duo Hong, Ondřej Lengál, Shin-Cheng Mu, Nishant Sinha, and Bow-Yaw Wang. An executable sequential specification for Spark aggregation. In Networked Systems (NETYS), pp. 421-438. 2017.

[GH11] Jeremy Gibbons, Ralf Hinze. Just do it: simple monadic equational reasoning. In International Conference on Functional Programming (ICFP), pp 2-14, 2011.

[KI15] Oleg Kiselyov, Hiromi Ishii. Freer monads, more extensible effects. In Symposium on Haskell, pp 94-105, 2015.

[dMG00] Oege de Moor, Jeremy Gibbons. Pointwise relational programming. In Rus, T. (ed.) Algebraic Methodology and Software Technology. pp. 371–390, Springer, 2000.

[Mu19a] Shin-Cheng Mu. Equational reasoning for non-determinism monad: the case of Spark aggregation. Tech. Report TR-IIS-19-002, Institute of Information Science, Academia Sinica, June 2019.

[Mu19b] Shin-Cheng Mu. Calculating a backtracking algorithm: an exercise in monadic program derivation. Tech. Report TR-IIS-19-003, Institute of Information Science, Academia Sinica, June 2019.

[PSM19] Koen Pauwels, Tom Schrijvers and Shin-Cheng Mu. Handling local state with global state. In Mathematics of Program Construction (MPC), Graham Hutton, editor, pp. 18-44. Springer, 2019.

The post Deriving Monadic Programs appeared first on niche computing science.

by Shin at December 02, 2019 01:53 PM

Michael Snoyman

Down and dirty with Future - Rust Crash Course lesson 8

It’s about a year since I wrote the last installment in the Rust Crash Course series. That last post was a doozy, diving into async, futures, and tokio. All in one post. That was a bit sadistic, and I’m a bit proud of myself on that front.

Much has happened since then, however. Importantly: the Future trait has moved into the standard library itself and absorbed a few modifications. And then to tie that up in a nicer bow, there’s a new async/.await syntax. It’s hard for me to overstate just how big a quality of life difference this is when writing asynchronous code in Rust.

I recently wrote an article on the FP Complete tech site that demonstrates the Future and async/.await stuff in practice. But here, I want to give a more thorough analysis of what’s going on under the surface. Unlike lesson 7, I’m going to skip the motivation for why we want to write asynchronous code, and break this up into more digestible chunks. Like lesson 7, I’m going to include the exercise solutions inline, instead of a separate post.

NOTE I’m going to use the async-std library in this example instead of tokio. My only real reason for this is that I started using async-std before tokio released support for the new async/.await syntax. I’m not ready to weigh in on, in general, which of the libraries I prefer.

You should start a Cargo project to play along. Try cargo new --bin sleepus-interruptus. If you want to ensure you’re on the same compiler version, add a rust-toolchain file with the string 1.39.0 in it. Run cargo run to make sure you’re all good to go.

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Sleepus Interruptus

I want to write a program which will print the message Sleepus 10 times, with a delay of 0.5 seconds. And it should print the message Interruptus 5 times, with a delay of 1 second. This is some fairly easy Rust code:

use std::thread::{sleep};
use std::time::Duration;

fn sleepus() {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500));
    }
}

fn interruptus() {
    for i in 1..=5 {
        println!("Interruptus {}", i);
        sleep(Duration::from_millis(1000));
    }
}

fn main() {
    sleepus();
    interruptus();
}

However, as my clever naming implies, this isn’t my real goal. This program runs the two operations synchronously, first printing Sleepus, then Interruptus. Instead, we would want to have these two sets of statements printed in an interleaved way. That way, the interruptus actually does some interrupting.

EXERCISE Use the std::thread::spawn function to spawn an operating system thread to make these printed statements interleave.

There are two basic approaches to this. One—maybe the more obvious—is to spawn a separate thread for each function, and then wait for each of them to complete:

use std::thread::{sleep, spawn};

fn main() {
    let sleepus = spawn(sleepus);
    let interruptus = spawn(interruptus);

    sleepus.join().unwrap();
    interruptus.join().unwrap();
}

Two things to notice:

  • We call spawn with spawn(sleepus), not spawn(sleepus()). The former passes in the function sleepus to spawn to be run. The latter would immediately run sleepus() and pass its result to spawn, which is not what we want.
  • I use join() in the main function/thread to wait for the child thread to end. And I use unwrap to deal with any errors that may occur, because I’m being lazy.

Another approach would be to spawn one helper thread instead, and call one of the functions in the main thread:

fn main() {
    let sleepus = spawn(sleepus);
    interruptus();

    sleepus.join().unwrap();
}

This is more efficient (less time spawning threads and less memory used for holding them), and doesn’t really have a downside. I’d recommend going this way.

QUESTION What would be the behavior of this program if we didn’t call join in the two-spawn version? What if we didn’t call join in the one-spawn version?

But this isn’t an asynchronous approach to the problem at all! We have two threads being handled by the operating system which are both acting synchronously and making blocking calls to sleep. Let’s build up a bit of intuition towards how we could have our two tasks (printing Sleepus and printing Interruptus) behave more cooperatively in a single thread.

Introducing async

We’re going to start at the highest level of abstraction, and work our way down to understand the details. Let’s rewrite our application in an async style. Add the following to your Cargo.toml:

async-std = { version = "1.2.0", features = ["attributes"] }

And now we can rewrite our application as:

use async_std::task::{sleep, spawn};
use std::time::Duration;

async fn sleepus() {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500)).await;
    }
}

async fn interruptus() {
    for i in 1..=5 {
        println!("Interruptus {}", i);
        sleep(Duration::from_millis(1000)).await;
    }
}

#[async_std::main]
async fn main() {
    let sleepus = spawn(sleepus());
    interruptus().await;

    sleepus.await;
}

Let’s hit the changes from top to bottom:

  • Instead of getting sleep and spawn from std::thread, we’re getting them from async_std::task. That probably makes sense.
  • Both sleepus and interruptus now say async in front of fn.
  • After the calls to sleep, we have a .await. Note that this is not a .await() method call, but instead a new syntax.
  • We have a new attribute #[async_std::main] on the main function.
  • The main function also has async before fn.
  • Instead of spawn(sleepus), passing in the function itself, we’re now calling spawn(sleepus()), immediately running the function and passing its result to spawn.
  • The call to interruptus() is now followed by .await.
  • Instead of join()ing on the sleepus JoinHandle, we use the .await syntax.

EXERCISE Run this code on your own machine and make sure everything compiles and runs as expected. Then try undoing some of the changes listed above and see what generates a compiler error, and what generates incorrect runtime behavior.

That may look like a large list of changes. But in reality, our code is structurally almost identical to the previous version, which is a real testament to the async/.await syntax. And now everything works under the surface the way we want: a single operating system thread making non-blocking calls.

Let’s analyze what each of these changes actually means.

async functions

Adding async to the beginning of a function definition does three things:

  1. It allows you to use .await syntax inside. We’ll get to the meaning of that in a bit.
  2. It modifies the return type of the function. async fn foo() -> Bar actually returns impl std::future::Future<Output=Bar>.
  3. Automatically wraps up the result value in a new Future. We’ll demonstrate that better later.

Let’s unpack that second point a bit. There’s a trait called Future defined in the standard library. It has an associated type Output. What this trait means is: I promise that, when I complete, I will give you a value of type Output. You could imagine, for instance, an asynchronous HTTP client that looks something like:

impl HttpRequest {
    fn perform(self) -> impl Future<Output=HttpResponse> { ... }
}

There will be some non-blocking I/O that needs to occur to make that request. We don’t want to block the calling thread while those things happen. But we do want to somehow eventually get the resulting response.

We’ll play around with Future values more directly later. For now, we’ll continue sticking with the high-level async/.await syntax.

EXERCISE Rewrite the signature of sleepus to not use the async keyword by modifying its result type. Note that the code will not compile when you get the type right. Pay attention to the error message you get.

The result type of async fn sleepus() is the implied unit value (). Therefore, the Output of our Future should be unit. This means we need to write our signature as:

fn sleepus() -> impl std::future::Future<Output=()>

However, with only that change in place, we get the following error messages:

error[E0728]: `await` is only allowed inside `async` functions and blocks
 --> src/main.rs:7:9
  |
4 | fn sleepus() -> impl std::future::Future<Output=()> {
  |    ------- this is not `async`
...
7 |         sleep(Duration::from_millis(500)).await;
  |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks

error[E0277]: the trait bound `(): std::future::Future` is not satisfied
 --> src/main.rs:4:17
  |
4 | fn sleepus() -> impl std::future::Future<Output=()> {
  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::future::Future` is not implemented for `()`
  |
  = note: the return type of a function must have a statically known size

The first message is pretty direct: you can only use the .await syntax inside an async function or block. We haven’t seen an async block yet, but it’s exactly what it sounds like:

async {
    // async noises intensify
}

The second error message is a result of the first: the async keyword causes the return type to be an impl Future. Without that keyword, our for loop evaluates to (), which isn’t an impl Future.

EXERCISE Fix the compiler errors by introducing an async block inside the sleepus function. Do not add async to the function signature, keep using impl Future.

Wrapping the entire function body with an async block solves the problem:

fn sleepus() -> impl std::future::Future<Output=()> {
    async {
        for i in 1..=10 {
            println!("Sleepus {}", i);
            sleep(Duration::from_millis(500)).await;
        }
    }
}

.await a minute

Maybe we don’t need all this async/.await garbage though. What if we remove the .await calls in sleepus? Perhaps surprisingly, it compiles, though it does give us an ominous warning:

warning: unused implementer of `std::future::Future` that must be used
 --> src/main.rs:8:13
  |
8 |             sleep(Duration::from_millis(500));
  |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_must_use)]` on by default
  = note: futures do nothing unless you `.await` or poll them

We’re generating a Future value but not using it. And sure enough, if you look at the output of our program, you can see what the compiler means:

Interruptus 1
Sleepus 1
Sleepus 2
Sleepus 3
Sleepus 4
Sleepus 5
Sleepus 6
Sleepus 7
Sleepus 8
Sleepus 9
Sleepus 10
Interruptus 2
Interruptus 3
Interruptus 4
Interruptus 5

All of our Sleepus messages print without delay. Intriguing! The issue is that the call to sleep no longer actually puts our current thread to sleep. Instead, it generates a value which implements Future. And when that promise is eventually fulfilled, we know that the delay has occurred. But in our case, we’re simply ignoring the Future, and therefore never actually delaying.

To understand what the .await syntax is doing, we’re going to implement our function with much more direct usage of the Future values. Let’s start by getting rid of the async block.

Dropping async block

If we drop the async block, we end up with this code:

fn sleepus() -> impl std::future::Future<Output=()> {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500));
    }
}

This gives us an error message we saw before:

error[E0277]: the trait bound `(): std::future::Future` is not satisfied
 --> src/main.rs:4:17
  |
4 | fn sleepus() -> impl std::future::Future<Output=()> {
  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::future::Future` is not implemented for `()`
  |

This makes sense: the for loop evaluates to (), and unit does not implement Future. One way to fix this is to add an expression after the for loop that evaluates to something that implements Future. And we already know one such thing: sleep.

EXERCISE Tweak the sleepus function so that it compiles.

One implementation is:

fn sleepus() -> impl std::future::Future<Output=()> {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500));
    }
    sleep(Duration::from_millis(0))
}

We still get a warning about the unused Future value inside the for loop, but not the one afterwards: that one is getting returned from the function. But of course, sleeping for 0 milliseconds is just a wordy way to do nothing. It would be nice if there were a “dummy” Future that more explicitly did nothing. And fortunately, there is.

EXERCISE Replace the sleep call after the for loop with a call to ready.

fn sleepus() -> impl std::future::Future<Output=()> {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500));
    }
    async_std::future::ready(())
}

Implement our own Future

To unpeel this onion a bit more, let’s make our life harder, and not use the ready function. Instead, we’re going to define our own struct which implements Future. I’m going to call it DoNothing.

use std::future::Future;

struct DoNothing;

fn sleepus() -> impl Future<Output=()> {
    for i in 1..=10 {
        println!("Sleepus {}", i);
        sleep(Duration::from_millis(500));
    }
    DoNothing
}

EXERCISE This code won’t compile. Without looking below or asking the compiler, what do you think it’s going to complain about?

The problem here is that DoNothing does not provide a Future implementation. We’re going to do some Compiler Driven Development and let rustc tell us how to fix our program. Our first error message is:

the trait bound `DoNothing: std::future::Future` is not satisfied

So let’s add in a trait implementation:

impl Future for DoNothing {
}

Which fails with:

error[E0046]: not all trait items implemented, missing: `Output`, `poll`
 --> src/main.rs:7:1
  |
7 | impl Future for DoNothing {
  | ^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Output`, `poll` in implementation
  |
  = note: `Output` from trait: `type Output;`
  = note: `poll` from trait: `fn(std::pin::Pin<&mut Self>, &mut std::task::Context<'_>) -> std::task::Poll<<Self as std::future::Future>::Output>`

We don’t really know about the Pin<&mut Self> or Context thing yet, but we do know about Output. And since we were previously returning a () from our ready call, let’s do the same thing here.

use std::pin::Pin;
use std::task::{Context, Poll};

impl Future for DoNothing {
    type Output = ();

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        unimplemented!()
    }
}

Woohoo, that compiles! Of course, it fails at runtime due to the unimplemented!() call:

thread 'async-std/executor' panicked at 'not yet implemented', src/main.rs:13:9

Now let’s try to implement poll. We need to return a value of type Poll<Self::Output>, or Poll<()>. Let’s look at the definition of Poll:

pub enum Poll<T> {
    Ready(T),
    Pending,
}

Using some basic deduction, we can see that Ready means “our Future is complete, and here’s the output” while Pending means “it’s not done yet.” Given that our DoNothing wants to return the output of () immediately, we can just use the Ready variant here.

EXERCISE Implement a working version of poll.

fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
    Poll::Ready(())
}

Congratulations, you’ve just implemented your first Future struct!
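To make poll a bit less abstract, here’s a self-contained sketch of what an executor does with it. The noop_waker and poll_do_nothing helpers are our own names for this illustration, not part of the post’s code: we build a Waker whose hooks do nothing, wrap it in a Context, and call poll by hand.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct DoNothing;

impl Future for DoNothing {
    type Output = ();

    // Report completion immediately, just like in the post.
    fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
        Poll::Ready(())
    }
}

// A Waker whose clone/wake/drop hooks all do nothing: just enough
// machinery to let us call poll by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll DoNothing once, the way an executor would.
fn poll_do_nothing() -> Poll<()> {
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);
    let mut fut = DoNothing;
    // DoNothing has no fields, so it is Unpin and Pin::new is safe.
    Pin::new(&mut fut).poll(&mut ctx)
}

fn main() {
    assert_eq!(poll_do_nothing(), Poll::Ready(()));
}
```

A real executor does essentially this in a loop, with a Waker that reschedules the task instead of doing nothing.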

The third async difference

Remember above we said that making a function async does a third thing:

Automatically wraps up the result value in a new Future. We’ll demonstrate that better later.

Now is later. Let’s demonstrate that better.

Let’s simplify the definition of sleepus to:

fn sleepus() -> impl Future<Output=()> {
    DoNothing
}

This compiles and runs just fine. Let’s try switching back to the async way of writing the signature:

async fn sleepus() {
    DoNothing
}

This now gives us an error:

error[E0271]: type mismatch resolving `<impl std::future::Future as std::future::Future>::Output == ()`
  --> src/main.rs:17:20
   |
17 | async fn sleepus() {
   |                    ^ expected struct `DoNothing`, found ()
   |
   = note: expected type `DoNothing`
              found type `()`

You see, when you have an async function or block, the result is automatically wrapped up in a Future. So instead of returning a DoNothing, we’re returning an impl Future<Output=DoNothing>. And our type wants Output=().

EXERCISE Try to guess what you need to add to this function to make it compile.

Working around this is pretty easy: you simply append .await to DoNothing:

async fn sleepus() {
    DoNothing.await
}

This gives us a little more intuition for what .await is doing: it’s extracting the () Output from the DoNothing Future… somehow. However, we still don’t really know how it’s achieving that. Let’s build up a more complicated Future to get closer.

SleepPrint

We’re going to build a new Future implementation which:

  • Sleeps for a certain amount of time
  • Then prints a message

This is going to involve using pinned pointers. I’m not going to describe those here. The specifics of what’s happening with the pinning aren’t terribly enlightening to the topic of Futures. If you want to let your eyes glaze over at that part of the code, you won’t be missing much.

Our implementation strategy for SleepPrint will be to wrap an existing sleep Future with our own implementation of Future. Since we don’t know the exact type of the result of a sleep call (it’s just an impl Future), we’ll use a parameter:

struct SleepPrint<Fut> {
    sleep: Fut,
}

And we can call this in our sleepus function with:

fn sleepus() -> impl Future<Output=()> {
    SleepPrint {
        sleep: sleep(Duration::from_millis(3000)),
    }
}

Of course, we now get a compiler error about a missing Future implementation. So let’s work on that. Our impl starts with:

impl<Fut: Future<Output=()>> Future for SleepPrint<Fut> {
    ...
}

This says that SleepPrint is a Future if the sleep value it contains is a Future with an Output of type (). Which, of course, is true in the case of the sleep function, so we’re good. We need to define Output:

type Output = ();

And then we need a poll function:

fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
    ...
}

The next bit is the eyes-glazing part around pinned pointers. We need to project the Pin<&mut Self> into a Pin<&mut Fut> so that we can work on the underlying sleep Future. We could use a helper crate to make this a bit prettier, but we’ll just do some unsafe mapping:

let sleep: Pin<&mut Fut> = unsafe { self.map_unchecked_mut(|s| &mut s.sleep) };

Alright, now the important bit. We’ve got our underlying Future, and we need to do something with it. The only thing we can do with it is call poll. poll requires a &mut Context, which fortunately we’ve been provided. That Context contains information about the currently running task, so it can be woken up (via a Waker) when the task is ready.

NOTE We’re not going to get deeper into how Waker works in this post. If you want a real life example of how to call Waker yourself, I recommend reading my pid1 in Rust post.

For now, let’s do the only thing we can reasonably do:

match sleep.poll(ctx) {
    ...
}

We’ve got two possibilities. If poll returns a Pending, it means that the sleep hasn’t completed yet. In that case, we want our Future to also indicate that it’s not done. To make that work, we just propagate the Pending value:

Poll::Pending => Poll::Pending,

However, if the sleep is already complete, we’ll receive a Ready(()) variant. In that case, it’s finally time to print our message and then propagate the Ready:

Poll::Ready(()) => {
    println!("Inside SleepPrint");
    Poll::Ready(())
},

And just like that, we’ve built a more complex Future from a simpler one. But that was pretty ad-hoc.
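For reference, the scattered SleepPrint pieces assemble into the complete implementation below. To keep the sketch dependency-free, the driver wraps std::future::ready(()) instead of async-std’s sleep, and the noop_waker helper is our own invention, not part of the post’s code.

```rust
use std::future::{ready, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct SleepPrint<Fut> {
    sleep: Fut,
}

impl<Fut: Future<Output = ()>> Future for SleepPrint<Fut> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        // Project Pin<&mut Self> into Pin<&mut Fut> to poll the inner Future.
        let sleep: Pin<&mut Fut> = unsafe { self.map_unchecked_mut(|s| &mut s.sleep) };
        match sleep.poll(ctx) {
            // Inner Future not done yet: propagate Pending.
            Poll::Pending => Poll::Pending,
            // Inner Future done: print our message, then report completion.
            Poll::Ready(()) => {
                println!("Inside SleepPrint");
                Poll::Ready(())
            }
        }
    }
}

// Our own do-nothing Waker, so we can drive poll by hand without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn poll_sleep_print() -> Poll<()> {
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);
    // ready(()) completes instantly, standing in for async-std's sleep.
    let mut fut = SleepPrint { sleep: ready(()) };
    Pin::new(&mut fut).poll(&mut ctx)
}

fn main() {
    assert_eq!(poll_sleep_print(), Poll::Ready(()));
}
```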

TwoFutures

SleepPrint is pretty ad-hoc: it hard codes a specific action to run after the sleep Future completes. Let’s up our game, and sequence the actions of two different Futures. We’re going to define a new struct that has three fields:

  • The first Future to run
  • The second Future to run
  • A bool to tell us if we’ve finished running the first Future

Since the Pin stuff is going to get a bit more complicated, it’s time to reach for that helper crate to ease our implementation and avoid unsafe blocks ourselves. So add the following to your Cargo.toml:

pin-project-lite = "0.1.1"

And now we can define a TwoFutures struct that allows us to project the first and second Futures into pinned pointers:

use pin_project_lite::pin_project;

pin_project! {
    struct TwoFutures<Fut1, Fut2> {
        first_done: bool,
        #[pin]
        first: Fut1,
        #[pin]
        second: Fut2,
    }
}

Using this in sleepus is easy enough:

fn sleepus() -> impl Future<Output=()> {
    TwoFutures {
        first_done: false,
        first: sleep(Duration::from_millis(3000)),
        second: async { println!("Hello TwoFutures"); },
    }
}

Now we just need to define our Future implementation. Easy, right? We want to make sure both Fut1 and Fut2 are Futures. And our Output will be the output from Fut2. (You could also return both the first and second output if you wanted.) To make all that work:

impl<Fut1: Future, Fut2: Future> Future for TwoFutures<Fut1, Fut2> {
    type Output = Fut2::Output;

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        ...
    }
}

In order to work with the pinned pointer, we’re going to get a new value, this, which projects all of the pointers:

let this = self.project();

With that out of the way, we can interact with our three fields directly in this. The first thing we do is check if the first Future has already completed. If not, we’re going to poll it. If the poll is Ready, then we’ll ignore the output and indicate that the first Future is done:

if !*this.first_done {
    if let Poll::Ready(_) = this.first.poll(ctx) {
        *this.first_done = true;
    }
}

Next, if the first Future is done, we want to poll the second. And if the first Future is not done, then we say that we’re pending:

if *this.first_done {
    this.second.poll(ctx)
} else {
    Poll::Pending
}

And just like that, we’ve composed two Futures together into a bigger, grander, brighter Future.
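Assembled, and with the pin_project! projection hand-rolled as unsafe code so the sketch compiles without any dependencies (the noop_waker driver is again our own), TwoFutures looks roughly like this:

```rust
use std::future::{ready, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct TwoFutures<Fut1, Fut2> {
    first_done: bool,
    first: Fut1,
    second: Fut2,
}

impl<Fut1: Future, Fut2: Future> Future for TwoFutures<Fut1, Fut2> {
    type Output = Fut2::Output;

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        // Hand-rolled projection; pin_project! generates a safe equivalent.
        let this = unsafe { self.get_unchecked_mut() };
        if !this.first_done {
            let first = unsafe { Pin::new_unchecked(&mut this.first) };
            // Ignore the first Future's output; just record that it finished.
            if let Poll::Ready(_) = first.poll(ctx) {
                this.first_done = true;
            }
        }
        if this.first_done {
            let second = unsafe { Pin::new_unchecked(&mut this.second) };
            second.poll(ctx)
        } else {
            Poll::Pending
        }
    }
}

// Our own do-nothing Waker so we can drive poll without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn run_two_futures() -> Poll<u32> {
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);
    let mut fut = TwoFutures {
        first_done: false,
        first: ready(1u32),
        second: ready(2u32),
    };
    // Both inner futures are immediately ready, so one poll finishes everything.
    Pin::new(&mut fut).poll(&mut ctx)
}

fn main() {
    assert_eq!(run_two_futures(), Poll::Ready(2));
}
```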

EXERCISE Get rid of the usage of an async block in second. Let the compiler errors guide you.

The error message you get says that () is not a Future. Instead, you need to return a Future value after the call to println!. We can use our handy async_std::future::ready:

second: {
    println!("Hello TwoFutures");
    async_std::future::ready(())
},

AndThen

Sticking together two arbitrary Futures like this is nice. But it’s even nicer to have the second Futures depend on the result of the first Future. To do this, we’d want a function like and_then. (Monads FTW to my Haskell buddies.) I’m not going to bore you with the gory details of an implementation here, but feel free to read the Gist if you’re interested. Assuming you have this method available, we can begin to write the sleepus function ourselves as:

fn sleepus() -> impl Future<Output = ()> {
    println!("Sleepus 1");
    sleep(Duration::from_millis(500)).and_then(|()| {
        println!("Sleepus 2");
        sleep(Duration::from_millis(500)).and_then(|()| {
            println!("Sleepus 3");
            sleep(Duration::from_millis(500)).and_then(|()| {
                println!("Sleepus 4");
                async_std::future::ready(())
            })
        })
    })
}
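To give a flavor of what that Gist contains, here’s a simplified and_then combinator of our own devising. It’s restricted to Unpin futures to dodge the pinning gymnastics, and uses a free function rather than a method, so treat it as a sketch of the idea rather than the real implementation.

```rust
use std::future::{ready, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Either still polling the first Future (holding the continuation for later),
// or polling the second Future that the continuation produced.
enum AndThen<Fut1, F, Fut2> {
    First(Fut1, Option<F>),
    Second(Fut2),
}

fn and_then<Fut1, F, Fut2>(fut1: Fut1, f: F) -> AndThen<Fut1, F, Fut2>
where
    Fut1: Future,
    F: FnOnce(Fut1::Output) -> Fut2,
{
    AndThen::First(fut1, Some(f))
}

impl<Fut1, F, Fut2> Future for AndThen<Fut1, F, Fut2>
where
    Fut1: Future + Unpin,
    F: FnOnce(Fut1::Output) -> Fut2 + Unpin,
    Fut2: Future + Unpin,
{
    type Output = Fut2::Output;

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        // Everything is Unpin, so we can safely work with a plain &mut Self.
        let this = self.get_mut();
        loop {
            match this {
                AndThen::First(fut1, f) => match Pin::new(fut1).poll(ctx) {
                    Poll::Pending => return Poll::Pending,
                    Poll::Ready(x) => {
                        // First Future finished: build the second from its output.
                        let f = f.take().expect("polled after completion");
                        *this = AndThen::Second(f(x));
                    }
                },
                AndThen::Second(fut2) => return Pin::new(fut2).poll(ctx),
            }
        }
    }
}

// Our own do-nothing Waker so we can drive poll without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn poll_and_then() -> Poll<i32> {
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);
    let mut fut = and_then(ready(2i32), |x| ready(x * 21));
    Pin::new(&mut fut).poll(&mut ctx)
}

fn main() {
    assert_eq!(poll_and_then(), Poll::Ready(42));
}
```

The real implementation has to deal with futures that are not Unpin, which is exactly where the pin-project machinery from the previous section earns its keep.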

And before Rust 1.39 and the async/.await syntax, this is basically how async code worked. This is far from perfect. Besides the obvious right-stepping of the code, it’s not actually a loop. You could recursively call sleepus, except that creates an infinite type which the compiler isn’t too fond of.

But fortunately, we’ve now finally established enough background to easily explain what the .await syntax is doing: exactly what and_then is doing, but without the fuss!

EXERCISE Rewrite the sleepus function above to use .await instead of and_then.

The rewrite is really easy. The body of the function becomes the non-right-stepping, super flat:

println!("Sleepus 1");
sleep(Duration::from_millis(500)).await;
println!("Sleepus 2");
sleep(Duration::from_millis(500)).await;
println!("Sleepus 3");
sleep(Duration::from_millis(500)).await;
println!("Sleepus 4");

And then we also need to change the signature of our function to use async, or wrap everything in an async block. Your call.

Besides the obvious readability improvements here, there are some massive usability improvements with .await as well. One that sticks out here is how easily it ties in with loops. This was a real pain with the older futures stuff. Also, chaining together multiple await calls is really easy, e.g.:

let body = make_http_request().await.get_body().await;

And not only that, it plays perfectly with the ? operator for error handling. The above example would more likely be:

let body = make_http_request().await?.get_body().await?;

main attribute

One final mystery remains. What exactly is going on with that weird attribute on main:

#[async_std::main]
async fn main() {
    ...
}

Our sleepus and interruptus functions do not actually do anything. They return Futures which provide instructions on how to do work. Something has to actually perform those actions. The thing that runs those actions is an executor. The async-std library provides an executor, as does tokio. In order to run any Future, you need an executor.

The attribute above automatically wraps the main function with async-std’s executor. The attribute approach, however, is totally optional. Instead, you can use async_std::task::block_on.

EXERCISE Rewrite main to not use the attribute. You’ll need to rewrite it from async fn main to fn main.

Since we use .await inside the body of main, we get an error when we simply remove the async qualifier. Therefore, we need to use an async block inside main (or define a separate helper async function). Putting it all together:

fn main() {
    async_std::task::block_on(async {
        let sleepus = spawn(sleepus());
        interruptus().await;

        sleepus.await;
    })
}

Each executor is capable of managing multiple tasks. Each task is working on producing the output of a single Future. And just like with threads, you can spawn additional tasks to get concurrent running. Which is exactly how we achieve the interleaving we wanted!

Cooperative concurrency

One word of warning. Futures and async/.await implement a form of cooperative concurrency. By contrast, operating system threads provide preemptive concurrency. The important difference is that in cooperative concurrency, you have to cooperate. If one of your tasks causes a delay, such as by using std::thread::sleep or by performing significant CPU computation, it will not be interrupted.

The upshot of this is that you should ensure you do not perform blocking calls inside your tasks. And if you have a CPU-intensive task to perform, it’s probably worth spawning an OS thread for it, or at least ensuring your executor will not starve your other tasks.

Summary

I don’t think the behavior under the surface of .await is too big a reveal, but I think it’s useful to understand exactly what’s happening here. In particular, understanding the difference between a Future value and actually chaining together the outputs of Future values is core to using async/.await correctly. Fortunately, the compiler errors and warnings do a great job of guiding you in the right direction.

In the next lesson, we can start using our newfound knowledge of Future and the async/.await syntax to build some asynchronous applications. We’ll be diving into writing some async I/O, including networking code, using Tokio 0.2.

Exercises

Here are some take-home exercises to play with. You can base them on the code in this Gist.

  1. Modify the main function to call spawn twice instead of just once.
  2. Modify the main function to not call spawn at all. Instead, use join. You’ll need to add a use async_std::prelude::*; and add the "unstable" feature to the async-std dependency in Cargo.toml.
  3. Modify the main function to get the non-interleaved behavior, where the program prints Sleepus multiple times before Interruptus.
  4. We’re still performing blocking I/O with println!. Turn on the "unstable" feature again, and try using async_std::println. You’ll get an ugly error message until you get rid of spawn. Try to understand why that happens.
  5. Write a function foo such that the following assertion passes: assert_eq!(42, async_std::task::block_on(async { foo().await.await }));

December 02, 2019 04:00 AM

Chris Penner

Advent of Optics: Day 2

Since I'm releasing a book on practical lenses and optics later this month I thought it would be fun to do a few of this year's Advent of Code puzzles using as many obscure optics features as possible!

To be clear, the goal is to be obscure, strange and excessive towards the goal of using as many optics as possible in a given solution, even if it's awkward, silly, or just plain overkill. These are NOT idiomatic Haskell solutions, nor are they intended to be. Maybe we'll both learn something along the way. Let's have some fun!

You can find today's puzzle here.


Every year of Advent of Code usually has some sort of assembly language simulator, looks like this year's came up early!

So we have a simple computer with registers which store integers, and an instruction counter which keeps track of our current execution location in the "program". There are two operations, addition and multiplication, indicated by a 1 or a 2 respectively. Each of these operations will also consume the two integers following the instruction as the addresses of its arguments, and a final integer representing the address to store the output. We then increment the instruction counter to the next instruction and continue. The program halts if ever there's a 99 in the operation address.

As usual, we'll need to start by reading in our input. Last time we could just use words to split the string on whitespace and everything worked out. This time there are commas in between each int, so we'll need a slightly different strategy. It's almost certainly overkill for this, but I've been wanting to show it off anyway, so I'll pull in my lens-regex-pcre library for this. If you're following along at home, make sure you have at LEAST version 1.0.0.0.

{-# LANGUAGE QuasiQuotes #-}

import Control.Lens
import Control.Lens.Regex.Text
import Data.Text.IO as TIO

solve1 :: IO ()
solve1 = do
  input <- TIO.readFile "./src/Y2019/day02.txt" 
           <&> toMapOf ([regex|\d+|] . match . _Show @Int)
  print input

>>> solve1
["1","0","0","3","1","1","2"...]

Okay, so to break this down a bit: I'm reading in the input file as Text, then using <&> (which is flipped (<$>)) to run the following transformation over the result. <&> is exported from lens, but is now also included in base as part of Data.Functor. I enjoy using it over <$> from time to time; it reads more like a 'pipeline', passing things from left to right.

This pulls out all the integers as Text blocks, but we still need to parse them. I'll use the unpacked iso to convert from Text to String, then use the same _Show trick from yesterday's problem.

solve1 :: IO ()
solve1 = do
    input <- TIO.readFile "./src/Y2019/day02.txt"
               <&> toListOf ([regex|\d+|] . match . unpacked . _Show @Int)
    print input
>>> solve1
[1,0,0,3,1,1,2,3...]

Okay, so we've loaded our register values, but from a glance at the problem we'll need random access to different register values. I won't worry about performance too much unless it becomes a problem, but using a list seems a bit silly, so I'll switch from toListOf to toMapOf to build a Map out of my results. toMapOf uses the index of your optic as the key by default, so I can just wrap my optic in indexing (which adds an increasing integer as an index to an optic) to get a sequential Int count as the keys for my map:

solve1 :: IO ()
solve1 = do
    input <- TIO.readFile "./src/Y2019/day02.txt"
               <&> toMapOf (indexing ([regex|\d+|] . match . unpacked . _Show @Int))
    print input

>>> solve1
fromList [(0,1),(1,0),(2,0),(3,3),(4,1)...]

Great, we've loaded our ints into "memory".

Next step, we're told at the bottom of the problem to initialize the 1st and 2nd positions in memory to specific values. Yours may differ, but it told me to set the 1st to 12 and the 2nd to 2. Easy enough to add that onto our pipeline!

input <- TIO.readFile "./src/Y2019/day02.txt"
           <&> toMapOf (indexing ([regex|\d+|] . match . unpacked . _Show @Int))
           <&> ix 1 .~ 12
           <&> ix 2 .~ 2

That'll 'pipeline' our input through and initialize the registers correctly.

Okay, now for the hard part: we need to actually RUN our program! Since we're emulating a stateful computer, it only makes sense to use the State monad, right? We've got a map to represent our registers, but we'll need an integer for our "read-head" too. Let's say our state is (Int, Map Int Int): the first slot is the current read-address, the second is all our register values.

Let's write one iteration of our computation, then we'll figure out how to run it until the halt.

oneStep :: State (Int, M.Map Int Int) ()
oneStep = do
    let loadRegister r = use (_2 . singular (ix r))
    let loadNext = _1 <<+= 1 >>= loadRegister
    let getArg = loadNext >>= loadRegister
    out <- getOp <$> loadNext <*> getArg <*> getArg
    outputReg <- loadNext
    _2 . ix outputReg .= out

getOp :: Int -> (Int -> Int -> Int)
getOp 1 = (+)
getOp 2 = (*)
getOp n = error $ "unknown op-code: " <> show n

Believe it or not, that's one step of our computation, let's break it down!

We define a few primitives we'll use at the beginning of the block. First is loadRegister. loadRegister takes a register 'address' and gets the value stored there. use is like get from MonadState, but allows us to get a specific piece of the state as focused by a lens. We use ix to get the value at a specific key out of the map (which is in the second slot of the tuple, hence the _2). However, ix r is a traversal, not a lens; we could either switch to preuse, which returns a Maybe-wrapped result, or we can use singular to force the result and simply crash the whole program if it's missing. Since we know our input is valid, I'll just go ahead and force it. Probably don't do this if you're building a REAL intcode computer :P

Next is loadNext; this fetches the current read-location from the first slot, then loads the value at that register. There's a bit of a trick here though: we load the read-location with _1 <<+= 1; this performs the += 1 action on the location, which increments it by one (we've 'consumed' the current instruction), but the leading << says to return the value there before altering it. This lets us cleanly get and increment the read-location all in one step. We then load the value in the current location using loadRegister.

We lastly combine these two combinators to build getArg, which gets the value at the current read-location, then loads the register at that address.

We can combine these all now! We loadNext to get the opcode, converting it to a Haskell function using getOp, then thread that computation through our two arguments getting an output value.

Now we can load the output register (which will be the next value at our read-location), and simply _2 . ix outputReg .= result to stash it in the right spot.

If you haven't seen these lensy MonadState helpers before, they're pretty cool. They basically let us write python-style code in Haskell!

Okay, now let's add this to our pipeline! If we weren't still inside the IO monad we could use &~ to chain directly through the MonadState action!

(&~) :: s -> State s a -> s 

Unfortunately there's no <&~> combinator, so we'll have to move our pipeline out of IO for that. Not so tough to do though:

solve1 :: IO ()
solve1 = do
    input <- TIO.readFile "./src/Y2019/day02.txt"
    let result = input
            & toMapOf (indexing ([regex|\d+|] . match . unpacked . _Show @Int))
            & ix 1 .~ 12
            & ix 2 .~ 2
            & (,) 0
            &~ do
                let loadRegister r = use (_2 . singular (ix r))
                let loadNext = _1 <<+= 1 >>= loadRegister
                let getArg = loadNext >>= loadRegister
                out <- getOp <$> loadNext <*> getArg <*> getArg
                outputReg <- loadNext
                _2 . ix outputReg .= out
    print result
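As a mental model for &~, it's just execState with its arguments flipped. Here's a base-only sketch; St, modifySt, and the hand-rolled instances are hypothetical stand-ins for the real State machinery from mtl/lens:

```haskell
-- Minimal hand-rolled State monad (base only), just to show the shape of (&~).
newtype St s a = St { runSt :: s -> (a, s) }

instance Functor (St s) where
  fmap f (St g) = St $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (St s) where
  pure a = St $ \s -> (a, s)
  St mf <*> St ma = St $ \s ->
    let (f, s')  = mf s
        (a, s'') = ma s'
    in (f a, s'')

instance Monad (St s) where
  St ma >>= f = St $ \s -> let (a, s') = ma s in runSt (f a) s'

modifySt :: (s -> s) -> St s ()
modifySt f = St $ \s -> ((), f s)

-- (&~) runs the stateful action and keeps only the final state.
(&~) :: s -> St s a -> s
s &~ action = snd (runSt action s)
```

With this sketch, (10 :: Int) &~ (modifySt (+ 1) >> modifySt (* 2)) evaluates to 22: the initial state is threaded through the action, and we keep the end result.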

This runs ONE iteration of our program, but we'll need to run the program until completion! The perfect combinator for this is untilM:

untilM :: Monad m => m a -> m Bool -> m [a] 

This lets us write it something like this:

&~ flip untilM ((==99) <$> (use _1 >>= loadRegister)) $ do ...

This would run our computation step repeatedly until it hits the 99 instruction. However, untilM is in the monad-loops library, and I don't feel like waiting for that to install, so instead we'll just use recursion.
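For reference, here's a sketch of what monad-loops' untilM does: run the body, then check the condition, repeating do-while style and collecting the results. The demo function is a hypothetical usage example, not from the puzzle:

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Sketch of monad-loops' untilM: run the body, then the condition,
-- repeating until the condition holds (do-while style), collecting results.
untilM :: Monad m => m a -> m Bool -> m [a]
untilM body cond = do
  a    <- body
  done <- cond
  if done then pure [a] else (a :) <$> untilM body cond

-- Tiny demo: bump a counter until it reaches 3, collecting each value seen.
demo :: IO [Int]
demo = do
  counter <- newIORef (0 :: Int)
  untilM (modifyIORef' counter (+ 1) >> readIORef counter)
         ((>= 3) <$> readIORef counter)
```

Running demo yields [1,2,3]; note the body runs at least once before the condition is consulted, matching our "execute an instruction, then check for 99" loop.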

Hrmm, using recursion here would require me to name my expression, so we could just use a let expression like this to explicitly recurse until we hit 99:

&~ let loop = do
              let loadRegister r = use (_2 . singular (ix r))
              let loadNext = _1 <<+= 1 >>= loadRegister
              let getArg = loadNext >>= loadRegister
              out <- getOp <$> loadNext <*> getArg <*> getArg
              outputReg <- loadNext
              _2 . ix outputReg .= out
              use _1 >>= loadRegister >>= \case
                99 -> return ()
                _ -> loop
   in loop

But the let loop = ... in loop construct is kind of annoying me; I'm not a huge fan.

Clearly the right move is to use anonymous recursion! (/sarcasm)

We can /simplify/ this by using fix!

fix :: (a -> a) -> a

&~ fix (\continue -> do
    let loadRegister r = use (_2 . singular (ix r))
    let loadNext = _1 <<+= 1 >>= loadRegister
    let getArg = loadNext >>= loadRegister
    out <- getOp <$> loadNext <*> getArg <*> getArg
    outputReg <- loadNext
    _2 . ix outputReg .= out
    use _1 >>= loadRegister >>= \case
      99 -> return ()
      _ -> continue
    )

Beautiful right? Well... some might disagree :P, but definitely fun and educational!

I'll leave you to study the arcane arts of fix on your own, but here's a teaser. Working with fix is similar to explicit recursion: you assume that you already have your result, then you can use it in your computation. In this case, we assume that continue is a state action which will loop until the program halts, so we do one step of the computation and then hand off control to continue, which will magically solve the rest. It's basically identical to the let ... in version, but more obtuse and harder to read, so obviously we'll keep it!
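To see fix in isolation, here's the classic factorial example; the explicit recursive call is replaced by the 'assumed result' rec:

```haskell
import Data.Function (fix)

-- fix f is the fixed point of f: fix f = f (fix f).
-- We assume `rec` already computes factorial, and use it for the recursive step.
factorial :: Integer -> Integer
factorial = fix $ \rec n -> if n <= 1 then 1 else n * rec (n - 1)
```

factorial 5 evaluates to 120, just as if we'd written the recursion by name; fix merely supplies the knot-tying for us.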

If we slot this in it'll run the computation until it hits a 99, and &~ returns the resulting state, so all we need to do is view the first instruction location of our registers to get our answer!

solve1 :: IO ()
solve1 = do
    input <- TIO.readFile "./src/Y2019/day02.txt"
    print $ input
            & toMapOf (indexing ([regex|\d+|] . match . unpacked . _Show @Int))
            & ix 1 .~ 12
            & ix 2 .~ 2
            & (,) 0
            &~ fix (\continue -> do
                let loadRegister r = use (_2 . singular (ix r))
                let loadNext = _1 <<+= 1 >>= loadRegister
                let getArg = loadNext >>= loadRegister
                out <- getOp <$> loadNext <*> getArg <*> getArg
                outputReg <- loadNext
                _2 . ix outputReg .= out
                use _1 >>= loadRegister >>= \case
                  99 -> return ()
                  _ -> continue
                )
            & view (_2 . singular (ix 0))

>>> solve1
<my answer>

Honestly, aside from the intentional obfuscation it turned out okay!

Part 2

Just in case you haven't solved the first part on your own, the second part says we now need to find a specific memory initialization which results in a specific answer after running the computer. We need to find the exact values to put into slots 1 and 2 which result in this number, in my case: 19690720.

Let's see what we can do! First I'll refactor the code from step 1 so it accepts some parameters:

solveSingle :: M.Map Int Int -> Int -> Int -> Int
solveSingle registers noun verb =
    registers
    & ix 1 .~ noun
    & ix 2 .~ verb
    & (,) 0
    &~ fix (\continue -> do
        let loadRegister r = use (_2 . singular (ix r))
        let loadNext = _1 <<+= 1 >>= loadRegister
        let getArg = loadNext >>= loadRegister
        out <- getOp <$> loadNext <*> getArg <*> getArg
        outputReg <- loadNext
        _2 . ix outputReg .= out
        use _1 >>= loadRegister >>= \case
          99 -> return ()
          _ -> continue
        )
    & view (_2 . singular (ix 0))

That was pretty painless. Now we need to construct some thingamabob which runs this with different 'noun' and 'verb' numbers (that's what the puzzle calls them) until it gets the answer we need. Unless we want to do some sort of crazy analysis of how this computer works at a theoretical level, we'll just have to brute force it. There are only 10,000 combinations, so it should be fine. We can collect all possibilities using a simple list comprehension:

[(noun, verb) | noun <- [0..99], verb <- [0..99]]

We need to run the computer on each possible set of inputs, which amounts to simply calling solveSingle on them:

solve2 :: IO ()
solve2 = do
    registers <- TIO.readFile "./src/Y2019/day02.txt"
               <&> toMapOf (indexing ([regex|\d+|] . match . unpacked . _Show @Int))
    print $ [(noun, verb) | noun <- [0..99], verb <- [0..99]]
              ^.. traversed . to (uncurry (solveSingle registers))

>>> solve2
[29891,29892,29893,29894,29895,29896,29897,29898,29899,29900...]

This prints out the answers to every possible combination, but we need to find one specific combination! We can easily find the answer using filtered, only, or even findOf; these are all valid:

>>> [(noun, verb) | noun <- [0..99], verb <- [0..99]] 
      ^? traversed . to (uncurry (solveSingle registers)) . filtered (== 19690720)
Just 19690720

-- `only` is like `filtered` but matches one specific value; note that it
-- focuses (), so a successful match comes back as Just ()
>>> [(noun, verb) | noun <- [0..99], verb <- [0..99]]
      ^? traversed . to (uncurry (solveSingle registers)) . only 19690720
Just ()

>>> findOf
      (traversed . to (uncurry (solveSingle registers)))
      (== 19690720)
      [(noun, verb) | noun <- [0..99], verb <- [0..99]]
Just 19690720

These all work, but the tricky part is that we don't actually care about the answer, we already know that! What we need is the arguments we passed in to get that answer. There are many ways to do this, but my first thought is to just stash the arguments away where we can get them later. Indexes are great for this sort of thing (I cover tricks using indexed optics in my book). We can stash a value into the index using selfIndex, and it'll be carried alongside the rest of your computation for you! There's the handy findIndexOf combinator which will find the index of the first value which matches your predicate (in this case, the answer is equal to our required output).

Here's the magic incantation:

findIndexOf (traversed . selfIndex . to (uncurry (solveSingle registers)))
            (== 19690720)
            [(noun, verb) | noun <- [0..99], verb <- [0..99]]
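Stripped of optics, that incantation computes something like this hypothetical base-only helper, which keeps each original input (the "index") alongside its image under f and returns the first input whose output satisfies the predicate:

```haskell
-- Hypothetical plain-list analogue of
-- findIndexOf (traversed . selfIndex . to f) p:
-- return the first INPUT whose output under f satisfies p.
findInputOf :: (a -> b) -> (b -> Bool) -> [a] -> Maybe a
findInputOf f p = foldr (\a rest -> if p (f a) then Just a else rest) Nothing
```

For instance, findInputOf (uncurry (+)) (== 7) [(n, v) | n <- [0..9], v <- [0..9]] gives back Just (0, 7): the first (noun, verb)-style pair producing the target, not the target itself.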

This gets us super-duper close, bu