#12299 - johnny_north - Fri Nov 07, 2003 3:25 am
Any of you folks have compression experience? I'm wondering which of the BIOS decompression types yield the best results on, say, 256-color graphic data? Also, on this type of data, does using a difference filter yield better results with all of the compression types (Huffman, LZ77, run-length)?
#12301 - DekuTree64 - Fri Nov 07, 2003 4:13 am
Just test out different compression methods with GBACrusher and see how it goes. Different pictures will get different results. According to my tests, LZ77 is generally the best for graphics, and difference filter sometimes makes things larger and sometimes smaller, but either way not by very much. Not really sure what it's good for, but there must be something if they put routines into the BIOS for it.
4-bit Huffman works well on 16-color tiles sometimes too, and run-length is great for pictures that have big flat color areas.
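For anyone who'd rather call the BIOS routines directly instead of going through a library, here's a minimal sketch. The SWI numbers (0x11 = LZ77UnCompWram, 0x12 = LZ77UnCompVram, 0x13 = HuffUnComp, 0x14/0x15 = RLUnComp) are from GBATEK; the code assumes Thumb compilation and a source buffer that begins with the usual BIOS header word, so treat it as illustrative rather than tested.

/* Minimal wrapper for the BIOS LZ77 VRAM decompressor (SWI 0x12 per GBATEK).
   Assumes Thumb code; in ARM code the SWI comment field shifts up (swi 0x12<<16).
   src must be word-aligned and start with the BIOS header word
   (compression type in bits 4-7, decompressed size in bits 8-31). */
static inline void lz77_uncomp_vram(const void *src, void *dst)
{
    register const void *r0 asm("r0") = src;   /* r0 = source      */
    register void       *r1 asm("r1") = dst;   /* r1 = destination */
    asm volatile("swi 0x12" : "+r"(r0), "+r"(r1) : : "r2", "r3", "memory");
}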
_________________
The best optimization is to do nothing at all.
Therefore a fully optimized program doesn't exist.
-Deku
#12302 - johnny_north - Fri Nov 07, 2003 4:50 am
I'm fairly impressed with my initial go:
I'm using the duplicate tile optimizer from gfx2gba .013, which saves a lot to start with on moderately complex 600-tile (240x160 mode 0) graphics. Across 25 pics I'm getting an average reduction of 150 tiles per pic.
lz77 gets another average 70% reduction on those remaining tiles.
lz77 palette reduction is about 50% which is predictable when using an average of 128 colors of a 256 entry palette.
lz77 is a toss-up on 600-entry maps from gfx2gba. Several compress as much as 62%, while on others I'm getting a loss of as much as 11%. It nets a small savings though.
According to Martin Korth's gbatek.htm doc on the difference unfilters:
Quote: |
SWI 22 (16h) - Diff8bitUnFilterWram
SWI 23 (17h) - Diff8bitUnFilterVram
SWI 24 (18h) - Diff16bitUnFilter
These aren't actually real decompression functions, destination data will have exactly the same size as source data. However, assume a bitmap or wave form to contain a stream of increasing numbers such like 10..19, the filtered/unfiltered data would be:
unfiltered: 10 11 12 13 14 15 16 17 18 19
filtered: 10 +1 +1 +1 +1 +1 +1 +1 +1 +1
In this case using filtered data (combined with actual compression algorithms) will obviously produce better compression results. |
If I'm thinking about it correctly though, it seems like this might be more appropriate for photo-like graphics or music, where there are lots of long incremental gradient changes, like shades of blue in a photo of the sky.
Still, I haven't had much luck finding algorithms or code for an appropriate difference filter. If anyone has a good reference I'd appreciate it.
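In case it helps anyone searching later, the filter itself is tiny; here's a rough sketch of the encoder side (first byte kept as-is, every following byte stored as the delta from its predecessor, mod 256). This is only my guess at the layout the BIOS Diff8bitUnFilter routines undo, and it leaves out the header word GBATEK says has to precede the data.

#include <stdint.h>
#include <stddef.h>

/* 8-bit difference filter: each output byte is the change from the
   previous source byte, wrapping mod 256 so nothing is lost. */
void diff8_filter(const uint8_t *src, uint8_t *dst, size_t len)
{
    uint8_t prev = 0;
    for (size_t i = 0; i < len; i++) {
        dst[i] = (uint8_t)(src[i] - prev);
        prev = src[i];
    }
}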
#12713 - Miked0801 - Mon Nov 24, 2003 8:55 am
One thing to be careful of with differential filters is that they can increase your base data size. As an example, take the values 0, 1, 200, 0. An 8-bit diff filter returns 0, 1, 199, -200 (which overflows 8 bits and, if your compressor isn't careful, destroys your data).
Some other common filters are as follows (assume data is A,B,C,D):
Average: (0+A)/2, (A+B)/2, (B+C)/2, (C+D)/2 (beware of odd bits)
Interleave in 2 or n streams: (A,C,E,... in 1st stream), (B,D,F,... in 2nd). This worked great on GBC titles I've worked on in the past with 16-bit and sprite data (see the sketch below).
2x2 average: (take a 2x2 image area, add together, then average)
Look on the net for compression filters for other examples. JPEG uses a few and is a good source for ideas.
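To illustrate the two-stream interleave idea, here's a rough sketch (the function name and layout are mine, not from any particular tool): even-indexed bytes go to one stream and odd-indexed bytes to the other, so similar bytes - e.g. all the high bytes of 16-bit values - end up adjacent and tend to compress better.

#include <stdint.h>
#include <stddef.h>

/* Split src into two streams: A,C,E,... followed by B,D,F,... */
void interleave2_split(const uint8_t *src, uint8_t *dst, size_t len)
{
    size_t half = (len + 1) / 2;          /* first stream gets the extra byte if len is odd */
    for (size_t i = 0; i < len; i++) {
        if (i & 1)
            dst[half + i / 2] = src[i];   /* second stream */
        else
            dst[i / 2] = src[i];          /* first stream  */
    }
}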
Other notes:
There are better compression types out there than the built-in ones. The built-in ones also run fairly slowly in comparison to what one could write and place in fast RAM. PUCrunch is my personal favorite for outright best compression - though if you take the code given for decompression as-is, it's really slow to decompress. Still, in my tests for data > 128 bytes, it beats all other compressions 99.9% of the time.
Mike
#12719 - col - Mon Nov 24, 2003 12:33 pm
Miked0801 wrote: |
One thing to be careful of with differential filters is that they can increase your base data size. As an example, take the values 0, 1, 200, 0. An 8-bit diff filter returns 0, 1, 199, -200 (which overflows 8 bits and, if your compressor isn't careful, destroys your data).
|
eh?
-200 masked to 8 bits is 56.
(200 + 56 = 256. 256 masked to 8 bits is 0)
so the diff filter (should?) return 0, 1, 199, 56, which doesn't overflow anything and equals the original base data size!
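A quick round trip in code (a throwaway sketch that keeps everything in unsigned 8-bit arithmetic) shows nothing is lost:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t data[4] = { 0, 1, 200, 0 };
    uint8_t filt[4], out[4];
    uint8_t prev = 0;

    for (int i = 0; i < 4; i++) {         /* filter: delta mod 256 */
        filt[i] = (uint8_t)(data[i] - prev);
        prev = data[i];
    }
    prev = 0;
    for (int i = 0; i < 4; i++) {         /* unfilter: running sum mod 256 */
        prev = (uint8_t)(prev + filt[i]);
        out[i] = prev;
    }
    printf("%d %d %d %d\n", filt[0], filt[1], filt[2], filt[3]);   /* 0 1 199 56 */
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);       /* 0 1 200 0  */
    return 0;
}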
cheers
Col
#13020 - Miked0801 - Wed Dec 03, 2003 7:33 pm
I'll do the decompress then as well to show the issue.
0, 1, 200, 0 -> 8-bit filter gives
0, 1, 199, -200 (56 as it rolls over) -> convert back to normal gives
0, (0+1) = 1, (1+199) = 200, (200+56) mod 256 = 0
Ok, I'm way wrong. Thanks for pointing that out :)
Mike
#14679 - MrMr[iCE] - Sat Jan 10, 2004 11:23 pm
Something to think about is how the graphics are drawn. If you have a lot of large areas filled with a single color, RLE is a champ. Stuff like 3D objects with lighting and shadows, or objects with antialiased borders, will have a lot more shading and gradients, which LZ77 or another algorithm will compress better.
The number of colors is a big factor as well. An image with only 16 or 32 colors will compress much more than one with 128 or 256 colors. While it might look bad with fewer colors on a PC monitor, the GBA's screen is so small you really won't notice the difference, especially if it's animated or moving fast. Take a look sometime at Samus' sprites in Metroid Fusion and you'll see what I mean. They use a very small palette, but because she's constantly moving you really can't notice it.
If you do your graphics with Photoshop, pay attention to the color table when you convert an image to indexed color. Photoshop loves to stuff in a lot of colors that are off by only a few shades. Sometimes images with a black background will end up with many different shades of black in the color table, especially when resizing or working with rendered images. You can pick them out easily by selecting the dupe colors in the color table dialog one by one and changing each to a high-contrast color not used in your image, then doing a Select->Color Range and filling the highlighted areas with the "real" color. Then you can cut the dupe entries out, leaving only the ones you need. That alone can cut an image's size in half after compression.
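If you'd rather catch those dupes programmatically, a quick scan over the final 15-bit palette works too. This is only an illustrative sketch (the tolerance is an arbitrary choice, and it just prints entry indices for you to inspect by hand):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Report pairs of BGR555 palette entries that differ by at most 'tol'
   in every 5-bit component - likely dupe colors worth merging. */
void find_near_dupes(const uint16_t *pal, int count, int tol)
{
    for (int i = 0; i < count; i++) {
        for (int j = i + 1; j < count; j++) {
            int dr = abs(( pal[i]        & 31) - ( pal[j]        & 31));
            int dg = abs(((pal[i] >>  5) & 31) - ((pal[j] >>  5) & 31));
            int db = abs(((pal[i] >> 10) & 31) - ((pal[j] >> 10) & 31));
            if (dr <= tol && dg <= tol && db <= tol)
                printf("entries %d and %d are nearly identical\n", i, j);
        }
    }
}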
_________________
Does the corpse have a familiar face?