Advent of Haskell

A Journey Through The Toolchain


Introduction

Last year, @topaz ran a coding challenge called Advent of Code over the holidays.

It's a lot of fun!

The problem sets vary widely, which presents an opportunity to get a good look at many aspects of a new or unfamiliar programming language. I did so with Haskell, and I've cobbled the results together here to present in unified form.

I took advantage of a small project with only myself at the helm to really dig into different parts of the ecosystem: stack (the new Haskell build tool), various testing and benchmarking libraries, and libraries like Repa to understand what high-performance Haskell code looks like. Some of what you'll see is very much overkill, but the purpose here is to show examples of the variety of tools and methods Haskell provides.

This whole exercise is intended to be a learning experience, so if you have any comments/questions/feedback/fixes for my horrible code, please do feel free to collaborate with me on GitHub!

Source

All of the source code is hosted on GitHub. The README.md is pretty straightforward if you want to build it yourself: stack makes the process eminently reproducible.
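For the curious, the typical stack workflow looks something like the following sketch. The repository URL is a placeholder (substitute the actual GitHub repo), and the commands assume stack is already installed:

```shell
# Placeholder URL -- substitute the real repository address.
git clone https://github.com/username/advent-of-haskell
cd advent-of-haskell

stack setup   # install the exact GHC version pinned in stack.yaml
stack build   # build the project against locked dependency versions
stack test    # run the test suite
```

Because stack.yaml pins both the compiler and every dependency to fixed versions, the same commands should produce the same build on any machine.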

Documentation

Haddock is the de facto standard documentation tool for Haskell, so the solution HTML documentation is provided as an example of what that process looks like. Note the Source links accompanying the functions and datatypes if you're curious about what inline Haddock documentation looks like.
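As a quick illustration of the inline markup Haddock picks up, here is a sketch with a hypothetical function (not one from the actual solutions): a `-- |` comment documents the definition that follows it, and `>>>` lines embed usage examples.

```haskell
module Example (countVowels) where

-- | Count the vowels in a string.
--
-- The comment above this signature becomes the function's
-- description in the generated HTML; the example below renders
-- as a code sample (and can be checked with doctest).
--
-- >>> countVowels "advent"
-- 2
countVowels :: String -> Int
countVowels = length . filter (`elem` "aeiou")
```

Running `stack haddock` generates the browsable HTML, with each documented export linked back to its source.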

Benchmarks

Criterion is a benchmarking library that produces detailed statistical reports, including mean run times with bootstrapped confidence intervals and outlier analysis. As of this writing, I haven't finished benchmarking every function in the challenge problem set, but the existing benchmarks give you a good idea of what the output looks like.
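A minimal Criterion benchmark suite looks like the sketch below. The `solve` function here is a hypothetical stand-in for a puzzle solution; `nf` forces the result to normal form so laziness doesn't hide the real cost of the computation.

```haskell
module Main (main) where

import Criterion.Main (bench, bgroup, defaultMain, nf)

-- Hypothetical stand-in for an actual puzzle solution.
solve :: Int -> Int
solve n = sum [1 .. n]

main :: IO ()
main = defaultMain
  [ bgroup "day1"
      [ bench "solve/1000"  (nf solve 1000)   -- small input
      , bench "solve/10000" (nf solve 10000)  -- larger input
      ]
  ]
```

Wired up as a benchmark target in the cabal/stack configuration, `stack bench` runs the suite and prints the timing statistics (or renders an HTML report with `--output`).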

Coverage

I used HPC to generate reports about code coverage. The reports cover statistics such as code branches that never get evaluated, expressions that only ever evaluate to one value, and other potential red flags.
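Generating those reports through stack is a short workflow, sketched here under the assumption that the project has a test suite defined:

```shell
# Run the test suite with coverage instrumentation enabled;
# GHC emits .tix tick files recording which expressions ran.
stack test --coverage

# Combine the tick files into a unified coverage report.
stack hpc report --all
```

The resulting HTML report highlights unevaluated code and boolean expressions that only ever took one branch, which is where the red flags mentioned above show up.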