ref: c98230dedff6cbb66d3d69fb5b424267b13db77a
parent: 0417e6335e33c99b84561b17aeba68c118c6f94f
author: Simon Tatham <anakin@pobox.com>
date: Mon Jun 20 13:32:45 EDT 2005
Conversation with Richard and Chris yesterday gave rise to a more sensible means of generating an initial gridful of rectangles.

This was previously a stupidly non-scalable bit of the Rectangles puzzle generator: it filled a ludicrously large array with every possible rectangle that could go anywhere in the grid, picked one at random, winnowed the list by removing anything that overlapped that one, then repeated until the list was empty (and therefore the grid was full except for remaining singleton squares). Total cost was O(N^4) in both time and space; not pretty.

Richard and Chris's sensible alternative was to place each rectangle by randomly choosing a so-far-uncovered _square_, and then picking a random rectangle from the possible ones covering that square. This means we only have to deal with a small fragment of the rectangle list at any one time, and we don't have to store the whole lot in memory; so it's _much_ faster and more scalable, and has virtually no memory cost.

A side effect of this algorithmic change is that the probability distribution has altered. When you line up all the possible _rectangles_ and pick one at random, then obviously the small ones are going to be in the majority, since lots of small ones can fit into the space taken up by any given big one. So the original algorithm tends to favour fiddly grids full of lots of tiny rectangles, which don't tend to be very interesting. But if you first pick a square and then think about the rectangles that can surround that square, the small ones are suddenly in the _minority_, because there are only two ways you can place (say) a 2x1 containing a given square, compared to 36 ways you can place a 6x6. So this algorithm favours more large rectangles, which I generally consider to be an improvement.

[originally from svn r5982]
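For concreteness, here is a minimal standalone C sketch of the square-first placement loop described above. It is not the actual Puzzles code: the grid size W x H, the MAXDIM cap on rectangle side length, and the brute-force candidate enumeration are all illustrative assumptions. The enumeration step also makes the probability argument visible, since a w x h rectangle can contain a given square in at most w*h distinct positions, so big rectangles contribute more candidates than small ones.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define W 8
#define H 8
#define MAXDIM 6                      /* illustrative cap on side length */

static int grid[H][W];                /* 0 = uncovered, else rectangle id */

struct rect { int x, y, w, h; };

/* Does the w x h rectangle at (x,y) lie in the grid and cover only
 * uncovered squares? */
static int fits(struct rect r)
{
    if (r.x < 0 || r.y < 0 || r.x + r.w > W || r.y + r.h > H)
        return 0;
    for (int j = r.y; j < r.y + r.h; j++)
        for (int i = r.x; i < r.x + r.w; i++)
            if (grid[j][i])
                return 0;
    return 1;
}

int main(void)
{
    struct rect cand[MAXDIM * MAXDIM * MAXDIM * MAXDIM];
    int uncovered = W * H, nextid = 1;

    srand((unsigned)time(NULL));

    while (uncovered > 0) {
        /* Choose one of the remaining uncovered squares uniformly. */
        int k = rand() % uncovered, sx = 0, sy = 0;
        for (int j = 0, found = 0; j < H && !found; j++)
            for (int i = 0; i < W && !found; i++)
                if (!grid[j][i] && k-- == 0) {
                    sx = i; sy = j; found = 1;
                }

        /* Enumerate every rectangle containing (sx,sy) that still fits;
         * a w x h rectangle contains a given square in at most w*h
         * positions, which is why this step favours larger rectangles. */
        int ncand = 0;
        for (int w = 1; w <= MAXDIM; w++)
            for (int h = 1; h <= MAXDIM; h++)
                for (int dx = 0; dx < w; dx++)
                    for (int dy = 0; dy < h; dy++) {
                        struct rect r = { sx - dx, sy - dy, w, h };
                        if (fits(r))
                            cand[ncand++] = r;
                    }

        /* The 1x1 at (sx,sy) always fits, so ncand >= 1; place one. */
        struct rect r = cand[rand() % ncand];
        for (int j = r.y; j < r.y + r.h; j++)
            for (int i = r.x; i < r.x + r.w; i++)
                grid[j][i] = nextid;
        nextid++;
        uncovered -= r.w * r.h;
    }

    for (int j = 0; j < H; j++) {     /* print the grid of ids */
        for (int i = 0; i < W; i++)
            printf("%3d", grid[j][i]);
        putchar('\n');
    }
    return 0;
}

Note that at no point does this hold more than one square's worth of candidate rectangles (at most w*h per size, bounded by the MAXDIM cap), which is the memory saving described above; running it a few times should also show the output biased towards larger rectangles.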