Advent of Code by @topaz is sort of an institution at this point. I've done it a few times over the years, and usually use the event as an excuse to learn a new language.
It's a lot of fun!
The problem sets vary widely, which presents an opportunity to get a good look at many aspects of a new or unfamiliar programming language. I did so with Haskell, and I've cobbled the results together here to present in unified form. The solutions date back to 2015, when I was fairly green at Haskell; I've since done much more Haskell programming, and may solve additional AoC challenges in Haskell in the future.
I took advantage of a small project with only myself at the helm to really dig into different parts of the ecosystem: stack (a Haskell build tool), various testing and benchmarking libraries, and some libraries like Repa to understand what high-performing Haskell code looks like. More recently, I've switched to a nix flake for the dev environment and dependency management.
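As a taste of that last point, here's a minimal sketch of the whole-array, parallel style Repa encourages. `doubleAll` is a toy example of my own, not a function from the actual solutions:

```haskell
import Data.Array.Repa as R

-- Double every element of a one-dimensional unboxed array.
-- computeP evaluates the delayed result of R.map in parallel.
doubleAll :: Monad m => Array U DIM1 Int -> m (Array U DIM1 Int)
doubleAll = computeP . R.map (* 2)

main :: IO ()
main = do
  let xs = fromListUnboxed (Z :. 10) [1 .. 10 :: Int]
  ys <- doubleAll xs
  print (toList ys)
```

Compiled with `-threaded` and run with `+RTS -N`, `computeP` spreads the work across available cores without any explicit loop or thread management.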
Some of what you'll see is very much overkill, but the purpose here is to show examples of the variety of tools and methods Haskell provides.
This whole exercise is intended to be a learning experience, so if you have any comments/questions/feedback/fixes for my horrible code, please do feel free to collaborate with me on GitHub!
All of the source code is hosted on GitHub. The README.md is pretty straightforward if you want to build it yourself: nix and stack make the process eminently reproducible.
Haddock is the de facto documentation generation tool for Haskell, so the solutions' HTML documentation is provided as an example of what that process produces. Note the Source links accompanying the functions and datatypes if you're curious about inline Haddock markup.
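If you haven't written Haddock before, it's just specially formatted comments: `-- |` before a declaration and `-- ^` after a field. Here's a small sketch using a hypothetical `Claim` type, not one taken from my solutions:

```haskell
-- | A rectangular claim of fabric, parsed from one line of puzzle input.
data Claim = Claim
  { claimId :: Int -- ^ Unique identifier for the claim
  , width   :: Int -- ^ Width of the claimed rectangle
  , height  :: Int -- ^ Height of the claimed rectangle
  }

-- | Total area covered by a 'Claim'.
area :: Claim -> Int
area c = width c * height c
```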
Criterion is a benchmarking library that produces really fantastic metrics: mean run times with confidence intervals, outlier analysis, and interactive HTML charts. As of this writing, I haven't yet benchmarked every function in the challenge problem set, but the existing benchmarks give you a good idea of what the output looks like.
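A benchmark suite is just an executable built on criterion's `defaultMain`. Here's a minimal sketch, with `fib` standing in for a real solution function:

```haskell
import Criterion.Main

-- A deliberately slow stand-in for a real AoC solver.
fib :: Int -> Integer
fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bgroup "fib"
      -- whnf re-applies the function on every iteration and evaluates
      -- the result to weak head normal form, so laziness and sharing
      -- don't skew the measurements.
      [ bench "fib 20" $ whnf fib 20
      , bench "fib 25" $ whnf fib 25
      ]
  ]
```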
I used HPC (Haskell Program Coverage) to generate code coverage reports. The reports flag things like branches that never get evaluated, boolean expressions that only ever evaluate to one value, and other potential red flags.
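To make that concrete, here's a hypothetical function (not from my solutions) where HPC would flag a problem if the tests only exercised the happy path:

```haskell
import Data.Char (toUpper)

-- If the tests only ever pass non-empty strings, the coverage report
-- will show the guard 'validate input' always evaluating to True and
-- the 'otherwise' branch never being entered.
shout :: String -> String
shout input
  | validate input = map toUpper input
  | otherwise      = ""
  where
    validate = not . null

main :: IO ()
main = putStrLn (shout "hello") -- exercises only the happy path
```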