#88322 - yackom - Sun Jun 18, 2006 7:44 pm
Someone brought up texture compression for the DS when I talked about the problems in my DSQuake project.
I found out that the texture compression format divides an image into 4x4 blocks of pixels, and for each block you can choose up to 4 colors. At 4 colors per 4x4 block it works out to about 4 bits per pixel, or half the size of a 256-color palettized image, or a quarter of 16-bit color.
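As a quick sanity check on that bit budget, here is a sketch of the index half of a block: with 4 colors, each pixel needs a 2-bit palette index, so the 16 indices of a block pack into one 32-bit word. The palette colors themselves are stored separately, and I am not trying to reproduce the exact hardware layout here, so treat this packing as illustrative:

Code:

#include <stdint.h>

/* Pack the 16 two-bit palette indices of one 4x4 block into a single
 * 32-bit word: 2 bits per pixel, pixel 0 in the lowest bits.
 * Illustrative packing only; the palette data lives elsewhere. */
uint32_t pack_block_indices(const uint8_t idx[16])
{
    uint32_t word = 0;
    for (int i = 0; i < 16; i++)
        word |= (uint32_t)(idx[i] & 3) << (2 * i);
    return word;
}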
Simple enough concept, so which 4 colors do you pick? That is an "image quantization" problem, basically the same problem that GIF compressors solve by picking at most 256 colors to represent a whole image. The difference is that we only have to compress 16 pixels at a time, far fewer than an entire image like a GIF. The algorithms that are popular for the full-image case aren't as high quality as what we can get on our little 4x4 blocks, because we can process those in a much more exact way via brute force. The algorithm I made isn't as sophisticated as other methods that work on full images, but it is much truer to the real problem, and for the 4-color-or-less case I would assume it produces much better results. I figure, why not use the best possible method if we can within a reasonable amount of time? These textures only have to be compressed once and shouldn't need to be compressed in real time. If a real-time solution is needed, the median cut algorithm would probably be the best trade-off between speed and quality [1].
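To make the quantization problem concrete: given a candidate palette, each pixel gets replaced by its nearest palette color, and the palette's quality is the total distance of that replacement. A minimal sketch, where the Color struct and the squared-Euclidean-distance metric are just illustrative choices:

Code:

#include <float.h>

typedef struct { float r, g, b; } Color;

/* Squared Euclidean distance between two colors. */
static float dist2(Color a, Color b)
{
    float dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

/* Total error of representing n pixels with the given palette:
 * each pixel counts as the distance to its nearest palette entry. */
float palette_error(const Color *pixels, int n,
                    const Color *palette, int ncolors)
{
    float total = 0;
    for (int i = 0; i < n; i++) {
        float best = FLT_MAX;
        for (int c = 0; c < ncolors; c++) {
            float d = dist2(pixels[i], palette[c]);
            if (d < best)
                best = d;
        }
        total += best;
    }
    return total;
}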
I must admit it is really quite brutal on my desktop CPU: a 128x256 image takes about 1 minute to calculate, and a 512x512 or so image takes about 5 minutes. That's no surprise, since brute-forcing every two-way split of a group of n colors means testing 2^(n-1) - 1 candidate splits; a full 16-pixel group alone has 32,767 of them. I haven't bothered with any algorithmic optimizations yet, such as branch and bound, sorting the colors by distance to each other, or parallelizing it for multicore processors. Once those optimizations are implemented it should be able to do a 512x512 in a minute or so.
So here is the pseudocode for compressing a block:
Code:

* Put all of the input colors into one group, in an array of groups.
* For 1 to 3   // each iteration adds one color to the palette
  {
    * Select the group with the largest total Euclidean distance
      between its pixels and their mean-average color.
    * Subdivide the selected group: generate every possible way to
      split it into two sets, score each split by the total Euclidean
      distance between each pixel and its set's mean-average color,
      and keep the split with the minimal total distance.
  }
* For each input color, output its group's mean-average color.
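And here is a rough C sketch of that pseudocode. To be clear, this is not my cleaned-up tool; representing a group as a 16-bit mask over the block's pixels and scoring with squared Euclidean distance are choices I made just for the sketch:

Code:

#include <float.h>

#define BLOCK 16                       /* pixels in a 4x4 block */

typedef struct { float r, g, b; } Color;

/* Mean-average color of the pixels selected by 'mask'. */
static Color group_mean(const Color *block, unsigned mask)
{
    Color m = { 0, 0, 0 };
    int n = 0;
    for (int i = 0; i < BLOCK; i++)
        if (mask & (1u << i)) {
            m.r += block[i].r; m.g += block[i].g; m.b += block[i].b;
            n++;
        }
    if (n) { m.r /= n; m.g /= n; m.b /= n; }
    return m;
}

/* Total squared distance from each selected pixel to the group mean. */
static float group_error(const Color *block, unsigned mask)
{
    Color m = group_mean(block, mask);
    float e = 0;
    for (int i = 0; i < BLOCK; i++)
        if (mask & (1u << i)) {
            float dr = block[i].r - m.r;
            float dg = block[i].g - m.g;
            float db = block[i].b - m.b;
            e += dr * dr + dg * dg + db * db;
        }
    return e;
}

/* Brute force: try every way to split 'mask' into two non-empty sets,
 * keep the split with the minimal summed error. */
static void best_split(const Color *block, unsigned mask,
                       unsigned *a, unsigned *b)
{
    float best = FLT_MAX;
    for (unsigned sub = (mask - 1) & mask; sub; sub = (sub - 1) & mask) {
        float e = group_error(block, sub) + group_error(block, mask ^ sub);
        if (e < best) { best = e; *a = sub; *b = mask ^ sub; }
    }
}

/* Quantize one block down to 'ncolors' groups (2..4) and write each
 * pixel's group mean into out[]. */
void compress_block(const Color *block, int ncolors, Color *out)
{
    unsigned groups[4] = { 0xFFFFu };  /* all 16 pixels start in one group */
    int ngroups = 1;

    while (ngroups < ncolors) {
        /* Select the splittable group with the largest error. With at
         * most 3 groups over 16 pixels, one always has >= 2 pixels. */
        int worst = -1;
        float werr = -1;
        for (int g = 0; g < ngroups; g++) {
            if ((groups[g] & (groups[g] - 1)) == 0)
                continue;              /* single pixel, cannot split */
            float e = group_error(block, groups[g]);
            if (e > werr) { werr = e; worst = g; }
        }
        unsigned a, b;
        best_split(block, groups[worst], &a, &b);
        groups[worst] = a;
        groups[ngroups++] = b;
    }

    for (int g = 0; g < ngroups; g++) {
        Color m = group_mean(block, groups[g]);
        for (int i = 0; i < BLOCK; i++)
            if (groups[g] & (1u << i))
                out[i] = m;
    }
}

The sub-mask loop visits each unordered split twice, and group errors get recomputed a lot; fixing those two things would be among the cheap optimizations I mentioned above.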
Results:
In all of the images the original is on the left and the compressed one is on the right. "2c" means 2 colors, "3c" 3 colors, "4c" 4 colors. The Shambler and rose images are small enough to put all 3 comparisons in the same file. Note that the mandrill image comes from a 256-color GIF source, so for 256-color images a person can't really tell the difference.
http://yackom.com/texturecompress/
If people would like, I could post my source code in a day or so after I clean it up a bit.
-yackom
Links:
[1] http://www.gamasutra.com/features/visual_arts/061997/a_few_good_colors.htm
This is a good, easy-to-read explanation of Paul Heckbert's "median cut" algorithm.
http://www.efg2.com/Lab/Library/Color/AndComputers.htm#Quantization
Another good source, covering newer color quantization algorithms.