I don’t think I’ve done quadtree image compression before (or at least if I have, it would have been years ago), but I know what a quadtree is, so I may be able to help.
You’ll definitely need more than 2 bits per node because a branch has 4 sub-nodes. That’s pretty much the definition of a quadtree.
The other problem is that if a sub-node is present you have to somehow know where it’s located, and most of the ways to do that are going to start costing decent chunks of memory.
This is probably going to get very tricky, particularly on a microcontroller, and particularly when your images are already 1 bit per pixel.
Having thought about it for a bit, I’ve come up with several possible ideas. It’s hard to know what will work best without some example data to test against, but I’ll throw some thoughts at you anyway…
Say you start with a branch node structured like this:
Where each subnode is an offset measured in nodes, and thus must be multiplied by sizeof(Branch) to locate the node it refers to. (This is done purposely to give you a wider range for fewer bits, and to avoid the problems that come with variably-sized nodes.)
Obviously you only really need 1 bit to identify the colour, so if you really wanted you could drop that byte and steal a bit from one of the subnodes, but that could get awkward.
However, what you could do is, instead of treating that byte as a simple ‘black’ or ‘white’, also include common dither patterns, an ‘inverse’ bit, or a draw-mode indicator. That way, if you’re forced to spend a whole byte, you don’t have to waste a whole byte. The downside is that you’ll chew up CPU cycles drawing fancy patterns, and it’ll take more progmem to implement them, but nothing in programming is free - everything’s a tradeoff.
Since a branch node already represents a block of colour, representing a leaf node as a colour and four null offsets seems a bit wasteful considering only one of those ~40 bits would actually be meaningful.
So here’s the clever part: instead of wasting those 4 subnode bytes, you can reuse them as image data. Unfortunately that only gives you 4 rows of data, so you may have to double up the data somehow. Or, again, you could use dither patterns, or you could use those bytes as indices into an array of small sprites (more like tiles) included with the data.
The interesting thing about that is that it should help to mitigate one of the problems with compression - detailed images don’t compress as well. This way even if you end up with a few small detailed sections, you can include that detail verbatim without too much extra cost.
Obviously this deviates from a pure quadtree system, but real-world image formats are rarely ‘pure’. The inclusion of bitmap data puts me in mind of the 64 discrete cosine transform basis patterns used in JPEG.
I’m still not sure quite how well this will actually work, particularly without sample data to work with.