January 29, 2023
In my first post on Hive, I wrote that Hive “requires careful design of data structures and algorithms to implement its movement rules efficiently.” Constraints like the one-hive rule and the freedom-to-move rule, and even some of the piece movements themselves, require some thought to get right.
However, it’s also easy to get lost in too much thought, especially at the beginning of a project. This is partly because it’s easier to imagine optimized code than it is to write it, and partly because programmers naturally don’t want to write a bunch of code and then have to replace it all when they discover a better approach. So you plan, think, and maybe draw on graph paper, but never actually produce any code.
I’m not typically the sort of programmer who cares about execution speed over all else. The software development community has long realized that 1) there are a number of factors other than speed that make for high-quality code, such as readability, ease of maintenance, or robustness, 2) computers are fast enough now that speed is often not something to worry about, and 3) the sort of speed/efficiency problems we have nowadays are complex, involving things like scaling and network and disk I/O, and are not always solved by simply fixating on lowering CPU time.
But game-playing AI algorithms are often CPU-bound. With the class of algorithms I tend to focus on, Monte Carlo tree search, faster code means more iterations, which directly translates into stronger play for a given amount of thinking time. Playing strength is the whole goal of the project, so to a large extent execution speed does take priority over other concerns.
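To make that trade-off concrete, here’s a minimal sketch of a time-budgeted search loop. It’s in Python, written against a hypothetical game-state interface (`legal_moves()`, `play()`, `is_terminal()`, `result()`), and it uses flat Monte Carlo rollouts rather than full MCTS with tree selection, but the point is the same: everything inside the `while` loop runs as many times as the budget allows, so making each iteration cheaper directly buys more samples per move.

```python
import random
import time

def random_playout(state):
    """Play random moves until the game ends; return 1 for a win, 0 otherwise.

    Assumes a hypothetical state object with is_terminal(), legal_moves(),
    play(move), and result() methods.
    """
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    return state.result()

def choose_move(root, time_budget=1.0):
    """Sample playouts until the time budget expires, then pick the best move.

    The cheaper each playout is, the more samples accumulate per move in the
    same budget, which is why per-iteration speed translates into strength.
    """
    stats = {move: [0, 0] for move in root.legal_moves()}  # move -> [wins, visits]
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        move = random.choice(list(stats))  # selection, heavily simplified
        stats[move][0] += random_playout(root.play(move))
        stats[move][1] += 1
    return max(stats, key=lambda m: stats[m][0] / max(1, stats[m][1]))
```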
I love graph paper and thinking about optimization, so it’s easy for me to get caught in this trap. That’s doubly true for a game like Hive, where I know optimization is going to matter. Yet it’s rare to hit upon a perfect, optimized design at the beginning of a project.
To move from graph paper to the code editor, I force myself to say “I guess I’ll just build it slow first.” Maybe I’ll have to brute-force a search, or recalculate something expensive every move instead of efficiently storing and updating it. I’ll accept that I’ll likely have to replace and rewrite code. But ultimately, you get closer to having fast code by writing code in the first place and iterating on it. Trying to get it right the first time takes a lot longer.
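As an example of what “slow first” might look like for Hive’s one-hive rule: just drop the moving piece out of the board and flood-fill the rest to see whether the hive is still connected, once per candidate move. This is a hedged sketch, not my actual implementation; it assumes pieces are stored as a set of axial hex coordinates (ignoring beetle stacks), and a tuned engine would probably maintain articulation points incrementally instead of recomputing this every time.

```python
def hive_stays_connected(pieces, moving_piece_pos):
    """Naive one-hive check: remove the piece, then flood-fill from any
    remaining piece and see whether every other piece is still reachable.

    `pieces` is assumed to be a set of axial hex coordinates (q, r), one per
    occupied hex, with stacking ignored for simplicity.
    """
    remaining = pieces - {moving_piece_pos}
    if not remaining:
        return True
    directions = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # six hex neighbors
    start = next(iter(remaining))
    seen = {start}
    frontier = [start]
    while frontier:
        q, r = frontier.pop()
        for dq, dr in directions:
            neighbor = (q + dq, r + dr)
            if neighbor in remaining and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return len(seen) == len(remaining)
```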
This, of course, is not some impressive insight. It’s one of the first lessons in any computer science book: “premature optimization is the root of all evil” (which, evidently, was popularized but not actually coined by Donald Knuth). Still, the fundamentals need to be not just learned, but remembered, and ultimately mastered.