What is Advent of Code? Well, let’s make it easy for ourselves and just quote the official description:
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.
Every year, several of us here at Cygni take the opportunity to improve our problem-solving skills with the daily puzzles. We have a dedicated Slack channel where solutions are posted and discussed, typically displaying an impressive range of languages and angles of attack, and an internal leaderboard provides some good-natured informal competition.
While the puzzles never explicitly state which type of algorithm, technique, data structure etc. should be used to find the solution, it is often (but not always) clear from the context, at least with the benefit of hindsight. To wit, some puzzles focus on recursion, some on pattern matching, some on math theorems, and sometimes they are tailored to specific, known algorithms -- this year, for example, the shunting-yard algorithm loomed large in one of them. In other words, finding the solution is often just as much about figuring out the most suitable approach (which is usually the intended approach) as it is about writing the actual code.
In the early puzzles the particulars of the solution often don’t matter too much, but as they get progressively (but not linearly) harder, the paths to victory get fewer and narrower. Each puzzle is split into two parts, where the second builds upon the first in some way or another, and it is common for the second part to increase the number of required iterations by several orders of magnitude, which will immediately make any naive solution to the first part unworkable.
OK, OK! Do note that there are spoilers for the 2020 event from here on, so consider yourself warned.
The second part of day 10 concerns itself with counting all possible arrangements of differences between subsequent “joltage ratings”, which is basically equivalent to counting the number of possible paths through a graph. Skipping over the part that computes which nodes are reachable from where, here’s a naive recursive solution in TypeScript:
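A sketch of such a recursion might look as follows; a small buildReachable helper is included to make the example self-contained, and the names are illustrative rather than taken from any actual solution:

```typescript
// For each rating, list the ratings reachable from it
// (i.e. those at most 3 jolts higher).
function buildReachable(ratings: number[]): Map<number, number[]> {
  const sorted = [...ratings].sort((a, b) => a - b);
  const reachable = new Map<number, number[]>();
  for (const r of sorted) {
    reachable.set(r, sorted.filter((s) => s > r && s - r <= 3));
  }
  return reachable;
}

// Naively count all distinct paths from `node` to the end of the chain.
function countPaths(reachable: Map<number, number[]>, node: number): number {
  const next = reachable.get(node) ?? [];
  if (next.length === 0) return 1; // end of the chain: one complete path
  let count = 0;
  for (const n of next) {
    // The same subpaths get re-traversed over and over again here.
    count += countPaths(reachable, n);
  }
  return count;
}
```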
When called with the first node (the first rating value), this function computes the correct result for the provided example data by recursively counting and adding up the number of reachable nodes. It does, however, do so very inefficiently since all possible paths will be traversed many times, and while this does not impact the test cases it would make the execution time for the actual puzzle data frighteningly long.
So, how can we fix it? Well, if the same paths are traversed many times, with the same result, why not save those results once they’ve been computed so that we can just reuse those values the next time we need them, rather than having to recompute them every time? Here’s an updated function which does just that by way of a dynamically populated cache:
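One way to sketch it is with the cache as an optional parameter, so that the call site stays the same (the reachable map is assumed to map each rating to the ratings reachable from it):

```typescript
// Same recursion, but each node's count is computed only once and
// then served from the cache on subsequent visits (memoization).
function countPaths(
  reachable: Map<number, number[]>,
  node: number,
  cache: Map<number, number> = new Map(),
): number {
  const cached = cache.get(node);
  if (cached !== undefined) return cached;
  const next = reachable.get(node) ?? [];
  let count = 0;
  if (next.length === 0) {
    count = 1; // end of the chain: one complete path
  } else {
    for (const n of next) count += countPaths(reachable, n, cache);
  }
  cache.set(node, count);
  return count;
}
```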
The recursive logic is exactly the same, but now the number of iterations is drastically reduced since each subpath only needs to be traversed once, so with this function the solution is arrived at virtually instantaneously. This technique of storing previously computed return values for quick lookup on subsequent calls is generally known as memoization.
Day 16's second part is about identifying a number of unknown fields, using rules that specify which field values are valid for which field type. For example, to reuse the test data from the actual puzzle: if you have the following three sets of similarly ordered field values:
3, 9, 18
15, 1, 5
5, 14, 9
...and apply the rules to find out that the valid values for the three available field types are these:
class: 9, 18, 15, 1, 5, 14
row: 3, 9, 18, 15, 1, 5, 14
seat: 3, 9, 18, 1, 5
...the possible positions in which each field type can occur are those where all field values are valid for that field type:
class: 2, 3
row: 1, 2, 3
seat: 3
In other words, the problem is basically equivalent to solving a system of equations, and it is readily apparent that the answer to this test case is row, class, seat, since that is the only field order that satisfies all conditions.
A naive solution here is to walk through all possible permutations of field positions until a match is found. As usual this is no problem with the example data, but the actual puzzle data are much more extensive, and the number of permutations grows factorially with the number of fields.
One Cygnian first tried a so-called backtracking depth-first search, which resulted in 150,504,690 iterations and finished in 10 minutes. Having thought more about it during the day, he came back in the evening with a solution that completed in under a millisecond. Now that’s some improvement! Here’s my own solution, again using TypeScript:
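A self-contained sketch of the approach follows; the FieldRule shape and the helper names are assumptions made for the sake of the example, not the original code:

```typescript
interface FieldRule {
  name: string;
  validRanges: Array<[number, number]>; // inclusive [min, max] pairs
}

function isValidValue(rule: FieldRule, value: number): boolean {
  return rule.validRanges.some(([min, max]) => value >= min && value <= max);
}

// Identify the (0-based) position of each field type by repeated elimination.
function identifyFields(rules: FieldRule[], tickets: number[][]): Map<string, number> {
  const positionCount = tickets[0].length;
  // Index of which positions each field type can occur in: a position
  // qualifies only if every ticket's value there satisfies the rule.
  const candidates = new Map<string, Set<number>>();
  for (const rule of rules) {
    const possible = new Set<number>();
    for (let pos = 0; pos < positionCount; pos++) {
      if (tickets.every((t) => isValidValue(rule, t[pos]))) possible.add(pos);
    }
    candidates.set(rule.name, possible);
  }
  // Repeatedly find a field type with exactly one possible position,
  // save it, and remove that position from all remaining entries.
  const identified = new Map<string, number>();
  while (candidates.size > 0) {
    const match = [...candidates.entries()].find(([, set]) => set.size === 1);
    if (!match) throw new Error('insufficient information to identify all fields');
    const [name, set] = match;
    const position = [...set][0];
    identified.set(name, position);
    candidates.delete(name);
    for (const remaining of candidates.values()) remaining.delete(position);
  }
  return identified;
}
```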
The FieldRule interface and the isValidValue function are not important here, but let's break the rest of it down. First, an index of which field types can occur in which positions is generated, i.e. the final preparation step in the example outlined above. This index is then iterated over in search of 1:1 matches, i.e. entries where a field type has only one possible position; each identified position is saved and then removed from all other entries in the index. This process repeats until all positions are exhausted, at which point we will have a full field position list, provided that the input data contained sufficient information to identify all positions -- the same constraint that applies to a system of equations.
Each year tends to have at least one day which is significantly more difficult than the others, as reflected in the puzzle completion statistics on the AoC site, and day 23 was one of them this time around. The problem involves keeping track of the positions of specific values (represented by cups) as they are shuffled around a looped collection (i.e. one that wraps around to the start when the end is reached) according to certain rules, and so a naive solution is to use whatever simple collection data structure is available in your language of choice and just move its elements around accordingly.
Unsurprisingly this works perfectly well for part 1, where only 100 moves with 10 cups need to be performed, but just as unsurprisingly part 2 ups the requirement to no less than 10,000,000 moves with 1,000,000 cups, which immediately puts the focus on the weakness of this type of approach.
This day sparked a fair bit of discussion in our Slack channel, and we bounced various ideas back and forth, some of which included looking for repeating patterns which could remove the need for performing the full set of iterations. In the end we all zeroed in on the choice of data structure as the primary point of importance.
But which structure, if a plain collection doesn’t cut it? Well, what we’re actually interested in in this case is not so much the collection as such, but rather the order between individual elements – specifically, which element follows which. For that sort of thing, a linked list is perfect:
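Sketching that idea with the list stored as a plain numeric array, where next[c] holds the label of the cup immediately following cup c (the function names here are illustrative):

```typescript
// Play the cup game: `labels` holds the initial cup order,
// and the returned array is the final linked list of cups.
function playCups(labels: number[], moves: number): number[] {
  const max = labels.length;
  const next = new Array<number>(max + 1);
  for (let i = 0; i < labels.length; i++) {
    next[labels[i]] = labels[(i + 1) % labels.length];
  }
  let current = labels[0];
  for (let move = 0; move < moves; move++) {
    // Pick up the three cups following the current one.
    const a = next[current];
    const b = next[a];
    const c = next[b];
    // Destination: current label minus one, wrapping around
    // and skipping the picked-up cups.
    let dest = current === 1 ? max : current - 1;
    while (dest === a || dest === b || dest === c) {
      dest = dest === 1 ? max : dest - 1;
    }
    // Re-link: no elements move, only three references change.
    next[current] = next[c];
    next[c] = next[dest];
    next[dest] = a;
    current = next[current];
  }
  return next;
}

// Collect `count` cup labels following cup `start`, link by link.
function getCupList(next: number[], start: number, count: number): number[] {
  const cups: number[] = [];
  let cup = next[start];
  for (let i = 0; i < count; i++) {
    cups.push(cup);
    cup = next[cup];
  }
  return cups;
}
```

Note that each move changes only three links, no matter how large the collection is.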
Since the elements are just plain numbers in this formulation, the linked list can be implemented as a straight numeric index, which is more efficient than one based on object instances. The getCupList function could well be replaced by simple index lookups here, but it has been included because it is also used for evaluating the final result in both part 1 and part 2 of the full solution. It also illustrates a drawback of linked lists: elements cannot be accessed by position, so extracting a slice requires traversing each link in turn, which has its own performance impact for large chunks -- but in this case the advantages of the structure itself vastly outweigh this weakness.
With this approach no element is actually moved, and the only things which do change are a small subset of the references (links) between elements. In this way the full iteration count of 10,000,000 is no longer daunting, and the code above completes in about a second with that load.
As the description at the top states, AoC can be used for a multitude of reasons, with or without specific goals. I will submit, however, that there are certain insights that will always creep up on participants whether they expect it or not. Here are some of mine:
Performance matters. Really, it does. Back in the olden days, when the average computer would have been eclipsed by even the most basic phone of today, there was little choice in the matter: if you wanted any heavy workload to complete in any semblance of reasonable time, you had better make sure that your code was performant. Nowadays, with the explosion of processing and storage resources we have seen, day-to-day tasks often don’t need to be optimized in any meaningful way since the effect will be negligible.
Not so in AoC! Beyond the first few puzzles, it is more common than not for part 2 to blow the latter view out of the water. The problems have obviously been specifically designed for this effect, thereby forcing perhaps unfamiliar thought patterns on us poor developers: What if we have to optimize?
The nature of these optimizations may or may not involve language-level constructs and techniques, and in many cases the way forward is to change one’s approach completely. A simple collection may well suffice to represent a small cup game, but it quickly becomes untenable for a large one, so instead of trying to optimize the code that uses this collection, the way forward is to choose another data structure -- which also works just as well for the initial case. Algorithm quality matters just as much as code quality -- or more.
One Cygnian upped the ante even further by attempting to write solutions in MicroPython for his ESP8266 chip, the hardware of which is severely restricted. This made for some unique challenges, such as input data that did not necessarily fit in memory and therefore had to be processed, and often also converted to a more compact format, one record at a time, and execution times were often counted in seconds even for simple problems. Talk about optimization pressure!
While premature optimization is the root of many evils, necessary optimization is another matter entirely. There is no shortage of examples where a top-of-one’s-head solution to a problem trundles along nicely 99 % of the time, but goes down in flames in that final percentile when it suddenly arrives -- and it is virtually guaranteed that it ultimately will. In other words, any production-level implementation needs to be able to handle the maximum workload the system can be expected to receive, or at least have a way to handle its own inability to handle it, especially in cases of unexpected workloads.
AoC drives this point home with force in a simple and accessible way. The description also says that “every problem has a solution that completes in at most 15 seconds on ten-year-old hardware”, which means that we know that such an optimization is possible -- something which is not always the case in the wild. Since my chosen environment this year was a self-contained Vue 3 frontend app I was further limited by the single-threaded, UI-blocking browser platform, which really put the onus on me to find such solutions. Nothing like a little UX pressure, eh?
I’ve used the term “naive solution” several times, by which I mean a solution that on first glance appears to fit the bill, but when analyzed further is found to contain unwarranted assumptions or limits in logic which make it unusable for non-idealized input. That does not make such a solution wrong per se -- just incomplete. (For a loose physics analogy, compare with the progression from Newtonian mechanics via Special Relativity to General Relativity.)
The idea of iterative software development, which underpins much of the prevailing agile development mindset today, therefore flows naturally from many AoC puzzles. In some cases the first solution you come up with, “naive” or not, may be perfectly adequate, in others it will serve as a stepping stone to a better solution -- or, equally valid, as an identified dead end which can then be removed from the set of possible ways forward.
First, make it work at all. Then, make it work better. Repeat until satisfactory. The thing about AoC is that it often forces that last repetition step on you, which can otherwise be in danger of being left by the wayside because of reasons. Refactor, refactor, refactor!
Testing and fault tolerance
All AoC puzzles include at least one example, complete with (limited) input data and the expected result(s). The route to testing your puzzle code is therefore wide open, whether formalized in an actual test runner or as a looser dynamic part of the development process, with the test cases practically writing themselves. What a luxury, eh?
However, don’t get too giddy. The quality and value of a test suite are only as good as the quality and value of its test cases – it is entirely possible to end up with an application that fails in a multitude of ways despite the test stage showing green across the board, even with high code coverage. It is not enough simply to test: you must be confident about what to test and how to test it for the results to carry meaning, and make sure that the time taken to write those tests in the first place could not have been better spent elsewhere (such as on writing other, more suitable tests).
For example, several of us encountered situations where our code would work perfectly fine for the example data, but fail completely on the actual puzzle data -- not for performance reasons as per above, but by simply yielding the wrong result. This was typically because those examples were incomplete representations of the full range of rules that the puzzle solutions needed to adhere to, and therefore the test cases did not catch certain errors or omissions which the full data triggered.
This brings us to a related point: input validation and fault tolerance. The inputs that AoC provides are generally well-behaved, with consistent formatting and no invalid values (i.e. values not adhering to the specifications of the puzzle at hand), and relying on this state of affairs is therefore not usually associated with a penalty. It is also often the case that the data provided are somewhat limited even within these specifications, regarding both examples and actual puzzle inputs, which frequently enables enticing shortcuts.
The first of these remarks does not always hold true, however, as several of us discovered at some time or another when a single line break too many or too few in the input file threw our parsing routines off the rails, leading to fruitless bug hunts in algorithm code which turned out to be entirely correct -- it just needed properly parsed input.
Speaking for myself, I made a conscious effort this year to write code which was both tolerant of unexpected input formatting and performed runtime error checking on the data being worked upon. In nearly all cases this was not actually necessary in order to solve the puzzles, but I took it upon myself to maintain a more production-level mindset -- see it as professional practice. A good rule is never to trust external input and always expect the unexpected, so to speak.
To take a concrete example, day 18 was about parsing and evaluating arithmetic expressions with unusual rules. As it happens the provided inputs contained only single-digit positive integers, which made the tokenization step fairly simple as all tokens – including spaces – were one character long. Having arrived at the answers for both parts in such a naive fashion, I then went back and expanded my parser with support for multiple-digit numbers, negative numbers and irregular whitespace, plus error handling for truly invalid expression formats, and added a slew of new test cases for verification. This was not necessary for the puzzle in any way (and still lacks support for decimals), but it just felt better!
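A hypothetical tokenizer covering those cases (multi-digit numbers, negative numbers, irregular whitespace, and errors for anything else) might look like this; it is an illustration of the idea, not the actual parser:

```typescript
type Token = number | '+' | '*' | '(' | ')';

// Tokenize an arithmetic expression with only +, * and parentheses,
// throwing on any character that cannot start a valid token.
function tokenize(expression: string): Token[] {
  const tokens: Token[] = [];
  let i = 0;
  while (i < expression.length) {
    const ch = expression[i];
    if (/\s/.test(ch)) { i++; continue; } // skip any amount of whitespace
    if (ch === '+' || ch === '*' || ch === '(' || ch === ')') {
      tokens.push(ch);
      i++;
      continue;
    }
    // Numbers: an optional leading minus followed by one or more digits.
    const match = /^-?\d+/.exec(expression.slice(i));
    if (!match) throw new Error(`invalid character '${ch}' at position ${i}`);
    tokens.push(Number(match[0]));
    i += match[0].length;
  }
  return tokens;
}
```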
Different languages come with different built-in constructs, standard libraries and tools that are more or less well suited for specific problems. With its self-contained puzzles and limited scope, AoC provides a perfect platform for exploring these differences -- and it is also the perfect opportunity to learn a new language altogether.
Since we Cygnians are a very diverse bunch, our weapons of choice tend to be quite varied too. For example, one Cygnian has this to say about Rust:
I use a lot of iterators, because they encourage thinking more functionally and are often optimized away completely by the compiler. Enums and pattern matching are also very nice, e.g. for state machines.
This snippet from his solution for day 3 demonstrates the first point:
And a more complex parser from day 20, taking further advantage of type coercion:
Languages like Python, and perhaps even more so Julia, are also very good at handling problems involving collections, iterations and combinations (an easter egg in one of this year’s puzzles even explicitly mentions itertools). For example, the solution to both parts of day 1 can be implemented very succinctly in Python, courtesy of another Cygnian:
Julia provides a very usable shortcut for collection operations by prefixing an operator or function with a dot. Here’s yet another Cygnian’s solution for day 10, which also replaces the memoizing approach outlined above with a reversed traversal:
Specific language features can offer shortcuts which sometimes border on “cheating”, by way of offloading some otherwise more or less complex logic to the compiler or runtime environment. One of the people who got onto the global leaderboard for day 18 hooked into Julia’s internal expression parser to redefine operator precedence, saving him from having to parse the input himself:
Here’s a somewhat less egregious TypeScript hack of my own for day 5, which takes advantage of internal numeric conversion of binary literals: it returns NaN if there is an invalid character anywhere in the input string, whereas parseInt with a radix would just stop at the first invalid character:
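A sketch of the trick (seatId is an assumed name):

```typescript
// Convert a boarding pass like "FBFBBFFRLR" to its seat ID.
// F/L mean 0, B/R mean 1; the whole string is one binary number.
function seatId(pass: string): number {
  const binary = pass.replace(/[FL]/g, '0').replace(/[BR]/g, '1');
  // Number('0b…') yields NaN if any character is not 0 or 1,
  // whereas parseInt(binary, 2) would silently stop at the first
  // invalid character and return a partial result.
  return Number('0b' + binary);
}
```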
I think Tailspin makes for nice and compact solutions. I especially appreciate parsing the input data as soon as it gets a bit more complicated, and how you can make the input sensible directly without a lot of string manipulation.
So, again, much can be gained by solving the puzzles in different languages. It could make you appreciate the features of a particular language more, for a particular type of problem, or make you realize that that language perhaps isn’t the best choice for the problem at hand -- or, preferably, enable you to carry over solutions and patterns which were easily arrived at in one language to another, putting them within easy reach the next time they’re needed.
It is also interesting to look at how different programming languages make us think differently, and how strong idioms can be. Sometimes it’s simply not worth porting a solution to a language where it is no longer idiomatic, if the readability suffers so much that you -- or your team mates, present or future -- have trouble understanding and maintaining it, even if it may in fact be more efficient.
To sum up, solve the puzzles using multiple languages, and especially multiple paradigms, if you can. The insights thereby gained are invaluable -- and, it’s fun! For the 2019 event I made an attempt to use as many languages and environments as possible, and (with some generous interpretation of differences) ended up with twelve separate paths, including bygone ones like Turbo Pascal and 16-bit DOS-based x86 assembly, just for the heck of it. How many can you manage?
So, whatever reasons you may have for joining Advent of Code, it will challenge you in various and sometimes unexpected ways, and by the end of it you will be a better developer than when you started, whether this was your intention or not. Now isn’t that a good way to spend a deep and dark December?
See you next time!