Wednesday, March 9th, 2011...18:07



At some point last year I became infatuated with a game commonly known as Flood-It by a company called LabPixies. My obsession resulted in the creation of a clone I call
C O L O R F L O W, which addresses the major faults I found with the game, namely that there was no way to:

  • undo an errant move
  • play the same board multiple times to improve on my own score
  • know if my sequence of moves was optimal
  • play the same board a friend had just played

First I needed a way to represent the state of the board and the game-playing functionality. There are 6 colors randomly spread over a 14 x 14 grid. My first instinct was to use a list of lists with the value of each element being a number between 0 and 5 inclusive, but eventually I decided to use a single-dimension, 196-element list moving left to right and top to bottom, e.g. in a 3 x 3 grid:

  0 1 2
  3 4 5
  6 7 8
This allowed me to reference any element on the board with a single index instead of an (x, y) pair, which made things a bit more efficient.
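The mapping between the flat index and grid coordinates is simple arithmetic; a minimal sketch (function names are mine, not from the original code):

```python
SIZE = 14  # board dimension from the article

def to_index(x, y):
    """Convert (column, row) grid coordinates to a flat list index."""
    return y * SIZE + x

def to_xy(index):
    """Convert a flat list index back to (column, row) coordinates."""
    return index % SIZE, index // SIZE
```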

Changing the colors of squares on the board requires knowing which squares are currently “flooded” and which adjacent squares are of the new color and will become “flooded”. Given a square, 42, in a 14 x 14 grid, it’s easy to see that square 28 is above it, 56 is below it, 43 is to the right, and there is no square to the left. While computing those on the fly is trivial, for efficiency reasons I decided to build up a dictionary mapping each square to its neighbors. Those two data structures were all I needed in order to write the color-changing functions. I later added a set to keep track of which squares had been “flooded”.
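A sketch of that neighbor table and a single color-change step — my own reconstruction of what the article describes, not the original code:

```python
SIZE = 14  # board dimension from the article

def build_neighbors(size=SIZE):
    """Map each square to the squares directly above, below, left, and right."""
    neighbors = {}
    for sq in range(size * size):
        x, y = sq % size, sq // size
        adj = []
        if y > 0:
            adj.append(sq - size)  # square above
        if y < size - 1:
            adj.append(sq + size)  # square below
        if x > 0:
            adj.append(sq - 1)     # square to the left
        if x < size - 1:
            adj.append(sq + 1)     # square to the right
        neighbors[sq] = adj
    return neighbors

def play_color(board, flooded, neighbors, color):
    """Recolor the flooded region, then absorb adjacent squares of the new color."""
    for sq in flooded:
        board[sq] = color
    frontier = list(flooded)
    while frontier:
        sq = frontier.pop()
        for n in neighbors[sq]:
            if n not in flooded and board[n] == color:
                flooded.add(n)
                frontier.append(n)
```

For square 42 this yields exactly the neighbors named above: 28, 56, and 43, with nothing to the left.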

At this point I could play a sequence of colors and ask the board whether it was flooded or not, so I decided to write a program to flood a given board in the fewest moves. The Flood-It version of the game caps the number of color changes for a 14 x 14 board at 22. At any point in the game you have a choice of up to 5 colors for the next move, so that’s 5^22 possible color sequences (2.4 quadrillion). Even if my code only took 1 microsecond to play each sequence of colors, it could take more than 75 years of continuous processing time to find the best one. Yes, there are cases where the branching doesn’t multiply by 5 at every step because fewer colors are valid candidates, and yes, if I found a working color sequence of length 16 right off the bat I could stop evaluating every subsequent sequence after 15 moves. But it would still take a very long time, especially since my code takes more than a microsecond to play a sequence of color changes.
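The back-of-the-envelope numbers check out:

```python
sequences = 5 ** 22                      # 2384185791015625, ~2.4 quadrillion
seconds = sequences / 1_000_000          # at 1 microsecond per sequence
years = seconds / (60 * 60 * 24 * 365)   # ~75.6 years of processing time
```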

So brute force was not an option. I experimented with a genetic algorithm, as a sequence of colors mapped very easily to the idea of a chromosome. However, I found that crossover between color sequences often created invalid color sequences. The compensation tactics (repairing chromosomes or using very large populations) were CPU intensive. I wanted this solver to finish in less than 5 seconds and the GA wasn’t even coming close.

My next strategy was to use a constrained lookahead. The idea was to generate and score all possible color combinations up to a fixed number of moves ahead, e.g. 7, then pick the best sequence and repeat until the board was flooded. This way I never had more than 5^7 sequences to evaluate at a time, so instead of 2.4 quadrillion, I only had to evaluate (5^7) × 3 = 234,375 sequences for any game that could be solved in 21 or fewer moves. So how does one determine the best sequence of colors part way through the game?
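Generating those candidate sequences is straightforward: at every step, any color except the one just played is a valid choice, giving at most 5 branches. A sketch (the generator is mine; only the depth and color counts come from the article):

```python
def candidate_sequences(current_color, depth, num_colors=6):
    """Yield every color sequence of the given depth that never
    repeats the color just played (at most 5 choices per step)."""
    if depth == 0:
        yield ()
        return
    for color in range(num_colors):
        if color == current_color:
            continue  # replaying the current color is never useful
        for rest in candidate_sequences(color, depth - 1, num_colors):
            yield (color,) + rest
```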

I had already written the basis of a scoring algorithm in the fitness function of my genetic algorithm as a series of checks:

First, if the board is flooded, return 1 + (1 / length of color sequence). This always yields a number > 1 with higher numbers denoting shorter color sequences which flood the board.

If the board isn’t flooded, then early in the game it’s important to maximize the number of endpoints: squares which are candidates to be flooded on the next move. That’s also an important characteristic toward the end of the game if we have not eliminated many colors. Another important metric at any point in the game is the number of currently flooded squares. Finally, boards with fewer candidate colors remaining are always better. I combined all this data with simple math operators (+, -, *, /) so I could sort and rank sequences of moves.
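The checks above might be combined roughly like this. The exact weighting is the author's and isn't given in the post, so this is a hedged sketch: the signature and the particular formula for unflooded boards are my own illustration, chosen only so that flooded boards always outrank unflooded ones.

```python
def score(flooded_count, endpoint_count, colors_remaining,
          board_flooded, moves_played):
    """Rank one candidate sequence. Flooded boards always score > 1,
    with shorter winning sequences ranking highest."""
    if board_flooded:
        return 1 + 1 / moves_played
    # Illustrative combination (not the author's actual formula):
    # reward flooded squares and frontier endpoints, penalize the
    # number of colors still on the board. Since flooded squares and
    # endpoints are disjoint subsets of the 196 squares, this stays <= 1.
    return (flooded_count + endpoint_count) / (196 * colors_remaining)
```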

After some optimization, my solver was finishing a game in around 30 seconds. That wasn’t terrible; the bigger problem was that there were cases where a lookahead of only 5 or 6 moves would find a better sequence of moves than a lookahead of 7. One flaw in my logic was that I was throwing away a lot of information by only keeping the best sequence after X moves. However, keeping even one additional sequence could double the runtime of my solver, so I needed to refine my approach.

With a bit of tweaking I was able to pivot the algorithm to play only one move at a time but keep the best X sequences up to that point. After some trial and error I found 400 sequences to be a sweet spot in terms of speed and accuracy. Since 5^4 is 625, using 400 meant I would keep almost all color sequences covering the first 4 moves, and each subsequent move would only require evaluating at most 2,000 sequences, from which I would cull the top 400 and repeat. For a 20-move board that’s at most 40,000 sequence evaluations, most of which were nowhere near 20 items in length. With this algorithm, my solver can figure out a reasonable solution to a board in about 4 seconds. In 25 test games it flooded the board in 20.6 moves on average and only 3 times did it require more than 22 moves (23). Also, as of this writing, neither I nor anyone else who has played the game online has been able to flood a board in fewer moves than the solver…
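That "keep the best X sequences, extend one move at a time" approach is essentially a beam search. A self-contained sketch over an abstract game, where the `play`, `score`, and `is_flooded` callables stand in for the real board logic described above:

```python
def beam_solve(initial_state, play, score, is_flooded, colors,
               beam_width=400, max_moves=30):
    """Beam search: extend every kept sequence by one move, score the
    results, keep the top `beam_width`, repeat until a board floods."""
    if is_flooded(initial_state):
        return ()
    beam = [((), initial_state)]
    for _ in range(max_moves):
        candidates = []
        for seq, state in beam:
            for color in colors:
                if seq and seq[-1] == color:
                    continue  # repeating the last color never helps
                candidates.append((seq + (color,), play(state, color)))
        for seq, state in candidates:
            if is_flooded(state):
                return seq  # first winning sequence found this round
        candidates.sort(key=lambda c: score(c[1]), reverse=True)
        beam = candidates[:beam_width]
    return None
```

With `beam_width=400`, the first 4 moves (625 possible sequences) are kept almost in full, and every later round evaluates at most 400 × 5 = 2,000 candidates, matching the numbers above.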

My work up to this point had been done in Python, and since I intended this game to be playable by humans in a web browser, I decided to use Google App Engine, since it meant I could easily build upon my current code and not have to deal with too much infrastructure setup. I did want to keep track of some high-level game statistics, so I created a minimal data model that could store the seed used to randomly generate the board, an initial move sequence that would solve the game, and a dictionary mapping number of moves to game completion counts. This fit very nicely into the GAE datastore. The last piece was the UI.

I ported the game functionality (not the solver) to HTML/CSS/JavaScript and added some bookkeeping to enable redo/undo. I purposely laid out the UI so it would look good on most smartphones, with the tradeoff being a lot of wasted space in a desktop browser. There are ways to change the display based on the device, but they all seem fraught with issues that I didn’t want to deal with. Histograms of previous game completions are generated via Google Charts, and I integrated with the Facebook Dialog UI to make it easy to challenge friends.

The high-level sequence of events is:

  1. Look in the datastore for a game with the given seed (default 0)
  2. If the game doesn’t exist, create it and solve it
  3. Return the game state to the browser
  4. All gameplay takes place in the browser with no further server-side contact
  5. When the board is flooded record the score on the server

It’s worth noting that upon completion of the game, the move history is sent to the server and it replays the move sequence to verify the board is actually flooded to protect against cheating. The source code for my version is available on GitHub. Please give C O L O R F L O W a try and let me know what you think.
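That server-side verification is just a replay: apply the claimed moves to a fresh board and check the result. A sketch, where `play_color` stands in for the color-changing function described earlier (the name and signature are my own illustration):

```python
def verify_score(board, flooded, neighbors, moves, play_color):
    """Replay a claimed move sequence on a fresh board; the score is
    only recorded if the replay actually floods every square."""
    for color in moves:
        play_color(board, flooded, neighbors, color)
    return len(flooded) == len(board)
```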

P.S. I found a C++/C# programming challenge to write a solver for the Flood-It game. My solver, configured to keep the top 400 sequences (the configuration I’m using on App Engine), ran through the problem set using 20,266 moves. That would have ranked me in 3rd place. However, the article claims the winner’s C++ code ran for over 20,000 seconds. My algorithm took only 3,264 seconds and was written in Python. Even accounting for the speedup in computers since 2008, it’s likely that the winning algorithm was searching deeper in the problem space.
