gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies. A new forum can be found here.

Coding > Why 30FPS instead of 60FPS?

#170802 - brave_orakio - Wed Oct 21, 2009 5:15 am

Hi guys, I was wondering why most professional games run at 30 FPS instead of 60 FPS? At least that's what I remember Mike saying some time back.

Currently my build has a sorting function, collision functions for both the background and other characters, a function to detect on-screen and off-screen characters, and a single character with AI who follows the main character around.

With around 13 characters in total (on and off screen), the 1 AI and the controlled character included, the current build eats up around 20K cycles max.

I am guessing that if I had more AI characters that would shoot up to around 60K-80K cycles, since my current AI runs at around 2,500 cycles and I plan to cap basic AI at around 5K cycles.

Now I am wondering: I still have a hefty 200K or so cycles left per frame. In commercial games, what eats up those cycles? I am guessing it has to do with sound decompression and maybe even BG and sprite resource decompression? I haven't tackled decompression yet, so I have no idea how much this taxes the ARM core of the GBA.
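
For reference, a back-of-the-envelope check on that budget, assuming the GBA's 16.78 MHz (2^24 Hz) clock and its ~59.73 Hz refresh rate:

Code:
16,777,216 cycles/s / 59.73 frames/s = ~280,896 cycles per frame
280,896 - ~80K (sprites, collision, AI) = roughly 200K cycles to spare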
_________________
help me

#170804 - LOst? - Wed Oct 21, 2009 11:53 am

brave_orakio wrote:
Hi guys, I was wondering why most professional games run at 30 FPS instead of 60 FPS? At least that's what I remember Mike saying some time back.

Currently my build has a sorting function, collision functions for both the background and other characters, a function to detect on-screen and off-screen characters, and a single character with AI who follows the main character around.

With around 13 characters in total (on and off screen), the 1 AI and the controlled character included, the current build eats up around 20K cycles max.

I am guessing that if I had more AI characters that would shoot up to around 60K-80K cycles, since my current AI runs at around 2,500 cycles and I plan to cap basic AI at around 5K cycles.

Now I am wondering: I still have a hefty 200K or so cycles left per frame. In commercial games, what eats up those cycles? I am guessing it has to do with sound decompression and maybe even BG and sprite resource decompression? I haven't tackled decompression yet, so I have no idea how much this taxes the ARM core of the GBA.


You should always aim for 60 fps. 30 fps is horrible for a trained eye ;)

About your question, commercial GBA and DS games run at 60 fps, unless something in the hardware prevents it (like having 3D shown on both screens on the DS using some hacking).
The other thing that I can think of that eats CPU time is decompressing data. That includes streaming music (like using MP3).

I always find it hard to time stuff, so I try to optimize everything as much as possible. Still, it is easy to write something that takes longer than a frame, and that will become 30 fps because of the waiting for vblank. I usually find it sooner rather than later and fix it. I don't know about other programmers, though. They might be missing it all the time :P

Look at it this way:
You are supposed to use every aspect of the GBA/DS hardware for your game, and when you do you will be able to run the game at 60 fps. That's what it was meant to do.
Once you step out of that box and start doing things in software, you need to do the very best you possibly can! I think that is fair, don't you? ;)
_________________
Exceptions are fun

#170806 - Dwedit - Wed Oct 21, 2009 1:23 pm

They write for 30 FPS because you can use cheaper programmers to accomplish that speed.
_________________
"We are merely sprites that dance at the beck and call of our button pressing overlord."

#170815 - Miked0801 - Wed Oct 21, 2009 4:20 pm

Assume much Dwedit? My god that statement really bothers me.

Lost, how does your trained eye catch 30fps games? The human eye can only detect around 24hz or so. So what's your secret?

A game designed to run at 30hz that always runs at 30hz will play the same as a game that was designed to run at 60hz and runs at 60hz.

brave, here are a few things you didn't mention that eat large chunks of cycles:
Music/sfx. The more the music does, the higher the hit.
Heavy decompression of assets on the fly. This is usually our #2 cpu hit behind music and sound. Cart space is expensive.
Multiplayer processing (for MP games)
3D rendering time - either in buffer feeding or pulling.

In my current game, sound/sfx are the #1 CPU hit, decompression is #2, and special effects processing and setup is #3. And yes, it's locked at 30hz.

#170822 - FluBBa - Wed Oct 21, 2009 7:06 pm

Miked0801 wrote:
The human eye can only detect around 24hz or so.

Assume much Miked? My god that statement really bothers me.

Give me 5 seconds of gameplay from a 2D or 3D game and I can tell if it runs at 30 or 60Hz, 100% of the time. (And no, YouTube doesn't support 60Hz, so don't link there; Gamersyde has some 60Hz clips, though.)
_________________
I probably suck, my not is a programmer.

#170823 - ritz - Wed Oct 21, 2009 7:36 pm

Interesting article:

http://amo.net/NT/02-21-01FPS.html (Part 1)
http://amo.net/nt/05-24-01FPS.html (Part 2)

#170827 - sajiimori - Wed Oct 21, 2009 10:27 pm

60fps rocks -- I love it.

But I've never shipped a 60fps game. :( Maybe someday...

I'm not going to say I have a "fast" DS engine. It's pretty good, but I won't call it "fast" until I run out of ideas for making it faster. As it is, I can comfortably do 30fps with lots of sfx, music, several characters with moderately fancy AI, streaming character animations, full-3D physics, and a maxed-out vertex buffer.

It's hard to imagine doing the same at 60, but it would be awesome!

id Software's Orcs and Elves does 60 by keeping things simple. I still think it looks great, though. =)

#170828 - keldon - Wed Oct 21, 2009 10:30 pm

Our eyes don't see in frames - per se. Our eyes are responsive to the most minute changes; however, the decay in our retinas allows us to retain the image from mere flickers. Also, because of our brains' *probabilistic phase sequence analysis of all processed stimuli, we will also perceive subsequent frames of visual images, providing they are displayed at a particular speed that does not fall too far behind the speed at which we prefer to process sequences.

Given that Formula One racers' braking points were within inches of each other each lap, we can then calculate that eyes are accurate to about a millisecond (or maybe a few, give or take the number of inches of difference in their braking points).

Either that or I've miscalculated ^_^

ADDENDUM:-

Regardless of the decay and the limitations of the retina and all the physical pathways before the signals are converted to electrical impulses, one can still apply a Gaussian (or motion) blur to an image and make a fairly accurate approximation of the edges in the picture - especially if the factors are known. In fact you could consider a Gaussian blur of a pixellated moving-yet-still image equivalent to our eyes seeing frames of animation. The frames are equivalent to the pixellation, and the blurring to the decay in the eye.

#170831 - sgeos - Wed Oct 21, 2009 10:55 pm

A 120Hz CRT looks a lot smoother than a 60Hz LCD.
Try... moving the mouse. =)

As I understand it, CRTs have blurring built in, which makes lower frame rates easier on the eyes, hence the ancient 24Hz technical standard.

#170833 - Miked0801 - Wed Oct 21, 2009 11:43 pm

:)

Mission accomplished on the 30/60 discussion. Of course you can detect the difference. 60hz games just feel a heck of a lot smoother and more responsive (and look nicer). I shouldn't troll, but I was feeling perverse this morning and wanted to see some responses. My apologies.

#170839 - brave_orakio - Thu Oct 22, 2009 2:29 am

Take it easy guys! heh.

Mike, you're right, I haven't implemented those yet, so I have no idea of the processor load for those.

I assume that BG resources and sound FX and music need to be decompressed. I don't know about sprites, though? How many cycles does one need to decompress the three resources (or four, if we include sprites) on average? No 3D rendering for me yet, though, as I have no computer capable of running decent 3D modeling software, heh. For now, just pure 2D.

And then how much for the processing of the actual sound and music "rendering"? I totally forgot that the GBA has no dedicated sound chip, so that also eats away at the processor cycles.
_________________
help me

#170843 - Miked0801 - Thu Oct 22, 2009 5:30 pm

Depends on what type of compression(s) you use for your sound and graphics assets.

On GBA, heavily compressed assets being loaded dynamically all over the place took 7-10% of the overall CPU load at 30hz. Sound took 15-20% at 30hz. Double those for 60hz.

Lighter compressions such as RLE/LZ77 still take about 3-5% at 30hz.

On the DS, decompression still takes around 7% to 10% at 30hz. Bigger assets mean more to do on the CPU. Sound varies depending on whether we decide to use MIDI or stream WAV-type info. MIDI is cheap. Streaming is definitely not.

Again, these numbers will vary a ton depending on what you are doing at the time. A heavily animating special-effects explosion will eat a lot more CPU with decompression and sound than normal gameplay.

#170849 - brave_orakio - Fri Oct 23, 2009 2:32 am

Wow, that is heavy. Looks like I should tackle compression early on to see if I need to change a few things. What type of heavy compression did you use, or is it custom? Also, should sprites be compressed?

Heh, I remember a while back that Castlevania: HOD had incredible graphics but unfortunately crappy sound. I wonder if that was on purpose, to concentrate more of the processing in other areas.
_________________
help me

#170856 - Miked0801 - Fri Oct 23, 2009 6:29 pm

Custom compression. It gets results similar to PUCrunch, but runs much faster. And hell yes you need to compress sprites. There is little in our games that is left uncompressed - just some lookup info, the occasional data header, and the very, very occasional graphic that, due to performance concerns, needs to be left alone. BGs are compressed, sound is compressed, text is compressed; hell, I believe even our code is compressed. You get the idea. If we had larger carts, we would probably still look into compression, to get data over the cartridge-interface bottleneck quicker.

Lossless compression BTW. For some reason, our artists don't like it much when we use lossy compressors :)

Still, for homebrew this is probably not quite as much of a concern. R4s are pretty darn big, after all.

#170857 - sajiimori - Fri Oct 23, 2009 7:20 pm

BTW, in case it's not obvious by now, Mike's talking about 2D assets and I was talking about 3D-related assets. We don't use much compression at all for 3D assets, like models, textures, animations, and BSP trees.

Most of our 3D-related formats already have naturally compact representations that don't compress very well, and besides, we're usually not as short on cart space for 3D games because skeletal animations are much more compact than pixel animations. The cost in memory (for an extra buffer for the compressed data) -- and to a lesser extent CPU (for decompressing) -- typically outweighs the benefit.

#170859 - Dwedit - Sat Oct 24, 2009 12:39 am

Do you use anything like Edgebreaker? It works very well for compressing 3D meshes.
_________________
"We are merely sprites that dance at the beck and call of our button pressing overlord."

#170862 - Miked0801 - Sat Oct 24, 2009 2:35 am

Yet... I've investigated compressing textures any number of times already - especially texture animations. There is pay dirt here that we've yet to really dig into.

Same for vertex animations, but to a much lesser extent.

#170906 - brave_orakio - Mon Oct 26, 2009 2:18 am

Thanks, friend! What percentage of space can you save with your custom compression? I might mess with BG and sound compression (when I get around to implementing sound), but probably not with sprites.
_________________
help me

#170913 - Miked0801 - Mon Oct 26, 2009 6:13 am

BGs will compress 2-3 to 1, depending on how they are created. Hand-created, 8-bit stuff will compress better than dithered, lower-resolution stuff. Sound compression is part and parcel of the libraries you use and I can't guess too much on this, except to say that the better the compression, the worse it will sound.

Sprites will compress in the same ballpark as well, depending on frame size and how it is drawn.

#170915 - brave_orakio - Mon Oct 26, 2009 6:54 am

That is huge! No wonder in the days of the SNES and Genesis they could fit relatively huge games into a max of 3-4MB, if I remember right! What takes up the most space, though? I'm thinking BGs and music? If sprites take a huge chunk, I really need to rethink a few things!

Another thing, who's the fella here who made Anguna? How big were the resources compressed and uncompressed?
_________________
help me

#170917 - sverx - Mon Oct 26, 2009 10:18 am

Miked0801 wrote:
Sound compression is part and parcel of the libraries you use and I can't guess too much on this, except to say that the better the compression, the worse it will sound.


In MMLL they're (we are? ;) ) using gzip'ed XMs... their size gets reduced to ~60% with absolutely no loss in quality. This of course needs a decompression buffer in RAM at run time, though.

#170919 - kusma - Mon Oct 26, 2009 4:17 pm

sverx wrote:
In MMLL they're (we are? ;) ) using gzip'ed XMs... their size gets reduced to ~60% with absolutely no loss in quality. This of course needs a decompression buffer in RAM at run time, though.

It should also be possible to keep pattern data compressed while playing, if selecting the compression scheme carefully. We've got such a compression scheme planned for Pimpmobile (but I'm unsure if it will ever be implemented - Pimpmobile development has stalled for a while now, due to lack of feedback). Does the XM-player in this project support such a feature?

#170920 - gauauu - Mon Oct 26, 2009 4:46 pm

brave_orakio wrote:
Another thing, who's the fella here who made Anguna? How big were the resources compressed and uncompressed?


That'd be me. I didn't compress the resources (other than combining map data into 4-tile metatiles), as it was small enough without it.

The total breakdown, for the gba version, was something around:

Total game size: 1700k

All non-enemy gfx: 190k
Level/Map data: 195k
Enemy gfx and scripts: 350k
Audio data: 850k

(those are rough estimates of the final sizes, based on glancing at the map file)

I had debated compressing stuff, but for a project of that size it just never seemed worth it. The audio was obviously the most bulky, which is sad as I only had 3 songs (but one of them wasn't really designed for the GBA, and thus had huge samples and wasn't very small).

#170922 - Ruben - Mon Oct 26, 2009 5:39 pm

Weeeeell, if this is for the DS, the sound data size can be halved thanks to IMA-ADPCM. The downside is that you have to align your samples to words (including loop points).
But if this is for the GBA, you can contact me and I'll see what I can do. (Right now, I use a LOT of samples and only take up ~470KB)

#170923 - Miked0801 - Mon Oct 26, 2009 6:59 pm

Yes, you can use lossless compression on sound, but the really good compression is achieved by downsampling and/or turning up the quantization so that the data becomes more lossy.

Onto what takes up the most space: it depends on the project. In general, sound and music will eat the most space. After that, sprites usually come in at #2 and BGs in 3rd. But a picture-heavy find-'em game like Mystery Case Files would probably go much higher in BG space. Same for huge scrolling maps that dynamically load in their chars. And in some games, data and text files will take a huge chunk of space. It all depends on what you are doing.

A good rule of thumb to see how well a game is using compression is to zip the binary file. If zip doesn't cut it way down in size, the game is using lots of compression. My current game, when compressed by zip, actually increases a touch in size. Tells me there ain't that much more space to be had :)

#170924 - Ruben - Mon Oct 26, 2009 8:29 pm

Ah yes, downsampling. You may be able to halve the sample rate at no real cost in quality, while halving the sound data size. IMO, what takes up the most space is sound data, followed by BGs, maps, sprites, and code.
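
A minimal sketch of that 2:1 downsample, assuming signed 8-bit samples (naive pair-averaging; a real resampler would low-pass filter first):

Code:
#include <stdint.h>
#include <stddef.h>

/* Halve the sample rate by averaging adjacent pairs; returns the new count. */
size_t downsample2(const int8_t *src, size_t n, int8_t *dst)
{
    size_t m = n / 2;
    for (size_t i = 0; i < m; i++)
        dst[i] = (int8_t)(((int)src[2 * i] + (int)src[2 * i + 1]) / 2);
    return m;   /* half the samples, so half the bytes */
}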

#170932 - brave_orakio - Tue Oct 27, 2009 2:37 am

Quote:
I didn't compress the resources


Hahaha! I guess we can get away with that since we technically have up to 32MB of space! Great job on the game, though! I didn't expect sound to take that much more space than the rest of the resources. Seems like sound compression will be very important in the future.

Sound compression seems to be a little complicated, though. Lossy compression means more processing power left over for game logic and other things, but less fidelity. I wonder if the SNES and Genesis took a hit on the processor with sound? They do have dedicated chips for sound, so I would think only minimal?

I reviewed my current design last night and figured I can probably minimize dynamic decompression and loading by decompressing a huge chunk of resources to EWRAM. EWRAM is pretty big and can probably store around 2-3 32x32 (5-direction) sets of uncompressed character sprites, and probably some BG character tiles as well. Good thing I built my map maker to take 4-bit character tiles; that minimizes my BG character tiles as well.

I doubt that local variables in functions will require more than 25K of EWRAM, so usually there will be a lot of free space in EWRAM, no?
_________________
help me

#170933 - Ruben - Tue Oct 27, 2009 3:22 am

Quote:
I wonder if the SNES and Genesis took a hit on the processor with sound? They do have dedicated chips for sound, so I would think only minimal?

I'm not sure about the Genesis, but I know the SNES had a separate audio chip called the SPC700, if I remember right. It was made by Sony, and WAY ahead of its time (which is a good thing).

#170945 - sverx - Tue Oct 27, 2009 3:23 pm

kusma wrote:
It should also be possible to keep pattern data compressed while playing, if selecting the compression scheme carefully.[...] Does the XM-player in this project support such a feature?


Actually it doesn't. But usually patterns are just a few KB, so I don't think this is a fundamental requirement on a DS. IMHO, of course.

#170946 - sverx - Tue Oct 27, 2009 3:26 pm

Ruben wrote:
Weeeeell, if this is for the DS, the sound data size can be halved thanks to IMA-ADPCM. The downside is that you have to align your samples to words (including loop points).


The same alignment is required for PCM8 and PCM16 samples too.

#170958 - Exophase - Tue Oct 27, 2009 6:21 pm

Ruben wrote:
I'm not sure about the Genesis, but I know the SNES had a separate audio chip called the SPC700, if I remember right. It was made by Sony, and WAY ahead of its time (which is a good thing).


Genesis has an FM synth and a CPU normally dedicated to sound, giving it a work distribution setup similar to SNES's. I always felt that having little audio acceleration outside of the original Gameboy capabilities was a major weakness for GBA. A lot of games, ports especially, sounded weak because the programmers weren't writing a strong audio engine.

In theory the GBA has enough CPU to do pretty good quality audio - if you don't need to use most of it for something else, like 3D - but even then the results are still limited to 8-bit. Most GBA games use very low sample rates like 10-15 kHz, too. There are some notable exceptions, though. I think Golden Sun 2 pushes it the hardest, putting out a staggering 64 kHz audio stream. But it still doesn't sound very competitive with good soundtracks on the SNES, at least not to me, and Motoi Sakuraba is one of my favorite composers.

Nintendo was kinda lazy with audio for a while after the SNES, where Sony did the work for them. The N64 also lacked any real audio acceleration. This was partially made up for by having library code on the RSP coprocessor handle it, but that ate away at your geometry transform time. The PS1, on the other hand, got an audio chip that was a lot like an upgrade to the SNES's DSP, and IMO got much better music because of it.

#170976 - Ruben - Tue Oct 27, 2009 11:04 pm

Well yeah, but the smaller the sample size, the stricter the alignment: ADPCM must be 8-sample aligned, PCM8 must be 4-sample, and PCM16 2-sample. I just meant that the more compressed the data, the stricter the alignment.

#170990 - brave_orakio - Wed Oct 28, 2009 7:05 am

What is 8-bit and 16-bit Differential Compression? I can't seem to find anything concrete about this type of compression. Or is it known by another name? Oh, and a tutorial about how I would go about doing my own custom compression would be nice as well!
_________________
help me

#170991 - Ruben - Wed Oct 28, 2009 7:33 am

Well, 8-bit data is just that: one byte for each sound 'wave'. 16-bit is two bytes, which give the sample a higher precision range. ADPCM is 4-bit, but is rather neat: instead of being a 4-bit sample value, the 4-bit code is a step value that adjusts an index used to look up from a sample table, giving it a fair amount of accuracy - higher than 8-bit, but lower than 16-bit.
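
A toy sketch of that idea in C (the tables below are made up for illustration, NOT the real IMA-ADPCM tables or bit layout; the point is just that the nibble carries a sign, a magnitude, and a step-index adjustment):

Code:
#include <stdint.h>

static const int16_t step_table[8]   = { 1, 2, 4, 8, 16, 32, 64, 128 }; /* toy values */
static const int8_t  index_adjust[8] = { -1, -1, -1, -1, 2, 4, 6, 8 };  /* grow/shrink step */

/* Decode one 4-bit code: bit 3 = sign, bits 0-2 = magnitude. */
int16_t adpcm_step(int16_t prev, uint8_t nibble, int *step_index)
{
    int step  = step_table[*step_index];
    int delta = ((nibble & 7) * step) / 4;
    int16_t out = (nibble & 8) ? (int16_t)(prev - delta) : (int16_t)(prev + delta);
    *step_index += index_adjust[nibble & 7];   /* adapt the step to the signal */
    if (*step_index < 0) *step_index = 0;
    if (*step_index > 7) *step_index = 7;
    return out;
}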

And about custom compression: it really depends on what you're doing. For example, data that has huge 'chunks' of exactly the same value could use RLE compression, while other data that is repetitive may benefit from LZ77. Therefore, when choosing your own compression, you have to take into account whether it can be lossy, the size/CPU-load ratio, and how to pack/unpack.
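
For instance, a minimal RLE encoder along those lines (a sketch of the count-plus-value idea, not the GBA BIOS RLE format):

Code:
#include <stdint.h>
#include <stddef.h>

/* Emit (run length, value) pairs; dst must hold 2*n bytes in the worst case. */
size_t rle_encode(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && src[i + run] == src[i] && run < 255)
            run++;
        dst[out++] = (uint8_t)run;   /* run length, 1..255 */
        dst[out++] = src[i];         /* the repeated value */
        i += run;
    }
    return out;                      /* compressed size in bytes */
}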

#170993 - keldon - Wed Oct 28, 2009 8:56 am

brave_orakio wrote:
What is 8-bit and 16-bit Differential Compression? I can't seem to find anything concrete about this type of compression. Or is it known by another name? Oh, and a tutorial about how I would go about doing my own custom compression would be nice as well!


It consists of having a palette defined for a chunk of audio. IIRC you update (or can update) the palette with each chunk - something like that.

#170997 - brave_orakio - Wed Oct 28, 2009 9:25 am

One more thing about compression. Should the binary file to be compressed already be understandable by the GBA?

Meaning if the binary file is a 32x32 palette indexed image, the image should be sequenced in 8x8 tile blocks and bytes sequenced in little endian?

Dunno if the little endian thing made sense, but when I made my own image editing tool and added a function to export images to a C file (as an array of unsigned int), I had to reverse the bytes of each entry (meaning 0xFFAB1020 becomes 0x2010ABFF, if I remember the process correctly) to get the correct image onscreen.
_________________
help me

#170998 - Ruben - Wed Oct 28, 2009 9:29 am

"the image should be sequenced in 8x8 tile blocks"

Yes, this would allow data to be loaded faster by avoiding unfiltering and stuff.

"bytes sequenced in little endian"

Err, bytes can't be little/big endian ;)
But yes, all data must be little endian. (That is, big endian 0x00112233 will be 0x33221100 in little endian and big endian 0x0011 will be 0x1100).

#171001 - brave_orakio - Wed Oct 28, 2009 10:11 am

Quote:
"bytes sequenced in little endian"


Hahaha! Yes, my mistake! I probably worded that wrong.

The reason I'm asking is that I might use the BIOS calls for decompressing, so I have no idea what goes on in there. I can only assume that the BIOS call doesn't resequence the data into 8x8 tiles or convert it to little endian; it just decompresses it straight.

So 8-bit and 16-bit differential and ADPCM are mostly used for sound?
_________________
help me

#171002 - Ruben - Wed Oct 28, 2009 10:22 am

On the GBA, you kind of *have* to use 8-bit sound (you could theoretically use any other sample type, but it's basically pointless for the following reason) as the GBA's sound depth is only 8 bits. On the DS, though, you can use 16-bit, ADPCM, and 8-bit, but I daresay that 99% of the time, official titles only use ADPCM.

And AFAIK, the BIOS simply decompresses data from one place to another, without altering the formatting, so yes, you have to format the data yourself using something like grit or Usenti.

For the GBA, I always use 8-bit samples and my own sound mixer. For the DS (which I barely code for, mind you =P), I mainly use ADPCM samples, but sometimes use 16-bit samples for short sounds like a bass sample and make use of the hardware channels.

#171040 - brave_orakio - Thu Oct 29, 2009 2:14 am

All right, let's see if I got the sequence right with compression:


First I have my 32x32 palette-indexed image(s). I convert the byte/hword/word sequence to 8x8 tiles.

Then I compress this using whatever compression I choose.

Lastly, if I were to convert this to an array of unsigned int (or even binary, really), I have to convert it into little endian.

Is this correct, or do I have to insert a "convert to little endian after the image is resequenced to 8x8 tiles" step before compression?

Oh, and can anybody point me to a tutorial on how to implement LZ77 compression for the GBA? I would like to add compression to my tools as well.
As far as I can tell from GBATEK, the window size is 16 bytes? And what are MSBs and LSBs? The only thing that comes to mind is most/least significant bits?
_________________
help me


Last edited by brave_orakio on Thu Oct 29, 2009 3:21 am; edited 2 times in total

#171041 - Ruben - Thu Oct 29, 2009 2:48 am

Well, usually you'd want to finish your image, remove duplicate tiles, make the map, export to a raw file in little-endian format (all of which can be done in Usenti using the Export function), compress said file into another file, and link this into your binary.

#171043 - Miked0801 - Thu Oct 29, 2009 3:30 am

Differential compression is a filter and not a compression technique per se. Imagine you have a metatile map that has indexes for each tile on the map. Let's also assume that, for now, each tile is unique. The data would look like:

0,1,2,3,4,5,6,7...

RLE and LZ77 compression can do nothing with this type of data. What to do? Keep the first byte as-is, then for each byte N store the difference between it and byte N+1:

0, -1, -1, -1, -1, -1, -1... And behold: data that compresses really, really well. It's easily reversible as well.

There are other filters that are used in similar fashion: the Discrete Cosine Transform and its inverse are used in JPEG, the Burrows-Wheeler transform in more advanced routines, or any other pattern you can think of that will make data more compressible.

Just store the compression type and filter in a header and you have a custom compression system.
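
A minimal sketch of that filter pair (the sign convention is arbitrary; decoding is just a running sum):

Code:
#include <stdint.h>
#include <stddef.h>

/* In-place delta filter: 0,1,2,3,... becomes 0,1,1,1,... */
void delta_encode(uint8_t *buf, size_t n)
{
    if (n < 2) return;
    for (size_t i = n - 1; i > 0; i--)
        buf[i] -= buf[i - 1];
}

/* Exact inverse: rebuild each byte from the running sum. */
void delta_decode(uint8_t *buf, size_t n)
{
    for (size_t i = 1; i < n; i++)
        buf[i] += buf[i - 1];
}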

#171045 - brave_orakio - Thu Oct 29, 2009 4:09 am

Huh, I see. A little at least. I'll have to study that eventually too.

A question about GBA LZSS (not LZ77!): why are the disp (displacement?) bits split in half like that?

Also tell me if I got this right:
Block Type 0: No match
Block Type 1: at least 3 characters long for a match, maximum of 16
(this is looking at GBATEK) a maximum window size of 12 bits (4096, off the top of my head), with value 2^11 at bit 0, 2^10 at bit 1...

Is this correct?
_________________
help me

#171067 - Miked0801 - Thu Oct 29, 2009 10:55 pm

I'm not really up to speed on the GBA LZ decompressor at this point, but what you are talking about sounds right.

Basically, it takes 2 bytes to store the 4-bit length and the 12-bit lookback window. My guess is that the byte count is actually 3-19 (+3) and not 3-16, which would waste a bit of space.

The no-match case would just mean the next byte is stored raw; the flag bits say which bytes of the next set are compressed or not.

The next step is to combine RLE and LZ into a single code, with a bit to tell which way to decompress. Then, if you want, you can start modifying the lookback window size depending on how many bytes you've read in (you can't look back 4K when you've only read in 32 bytes). Then, perhaps, you can also modify the window itself to be less than 2 bytes, thus allowing you to compress 2-byte groups (LZ or RLE). Then there are ways to encode the number of bytes and the length in a single code (Elias codes) to allow dynamic length management. How about a Huffman-style table for simple codes thrown in for good measure? Do all of the above and you have a fairly good compressor that is somewhat slow to decompress, due to it reading in bits instead of bytes/halfwords.
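
For reference, a minimal C sketch of a decoder for the BIOS LZ77 stream as GBATEK documents it (4-bit length +3 gives 3-18 byte matches, 12-bit disp +1 gives a 1-4096 byte lookback; no error handling):

Code:
#include <stdint.h>

/* Decode a GBATEK-style LZ77 stream: 4-byte header (size << 8 | 0x10),
   then flag bytes, MSB first: bit set = 2-byte match token, clear = literal. */
void lz77_decode(const uint8_t *src, uint8_t *dst)
{
    uint32_t remaining = src[1] | (src[2] << 8) | ((uint32_t)src[3] << 16); /* bits 8-31 */
    src += 4;
    while (remaining > 0) {
        uint8_t flags = *src++;
        for (int i = 0; i < 8 && remaining > 0; i++, flags <<= 1) {
            if (flags & 0x80) {                                   /* back-reference */
                int len  = (src[0] >> 4) + 3;                     /* 3..18         */
                int disp = (((src[0] & 0x0F) << 8) | src[1]) + 1; /* 1..4096       */
                src += 2;
                while (len-- > 0 && remaining > 0) {
                    *dst = dst[-disp];   /* overlapping copy is legal */
                    dst++;
                    remaining--;
                }
            } else {                     /* literal byte */
                *dst++ = *src++;
                remaining--;
            }
        }
    }
}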

#171070 - brave_orakio - Thu Oct 29, 2009 11:33 pm

Thanks! I guess a slow decompressor would be OK for me, since I won't be decompressing on the fly - only semi-dynamically (if I can call it that), since I'll be using EWRAM for a frame buffer.

Looks like I got my work cut out for me!
_________________
help me

#171078 - Miked0801 - Fri Oct 30, 2009 5:57 pm

Google PUCrunch and be amazed at how well the author describes the process.

#171079 - Dwedit - Fri Oct 30, 2009 7:04 pm

PUCrunch's weakness is its dependency on bit-by-bit reading. aPLib corrected this by making it so that whenever you need to read a bit, it reads an entire byte out of the input stream, so the rest of the bytes can stay aligned.
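
Something like this, in C (a sketch of the general byte-aligned flag trick, not aPLib's actual stream format):

Code:
#include <stdint.h>

typedef struct {
    const uint8_t *src;   /* input stream          */
    uint8_t        bits;  /* current flag byte     */
    int            count; /* flag bits left unread */
} bitreader;

/* Read one flag bit; refills by grabbing a whole byte, so the
   literal/match payload bytes in between stay byte-aligned. */
static int get_flag(bitreader *br)
{
    if (br->count == 0) {
        br->bits  = *br->src++;
        br->count = 8;
    }
    int bit = (br->bits >> 7) & 1;   /* MSB first */
    br->bits <<= 1;
    br->count--;
    return bit;
}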
_________________
"We are merely sprites that dance at the beck and call of our button pressing overlord."

#171088 - Miked0801 - Sat Oct 31, 2009 5:22 am

Yep, that's why it's slow. But, the ideas are still real good.

#171127 - brave_orakio - Mon Nov 02, 2009 2:41 am

A question about LZSS though, since I couldn't find anything about this anywhere. If I have something like this midway through the encoding process:

AACD(23, 4)FE, and then later on I find the string ACDFE.
Will it become (Offset, 5) or will it be (Offset, 3)FE?
_________________
help me

#171129 - Miked0801 - Mon Nov 02, 2009 4:40 am

The first one. If you watch how it works, a string like AAAAAAAA will compress as (uncompressed) A, length 1, then (compressed) offset 0, length 7.
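
The decode side of that example, as a self-contained toy:

Code:
#include <stdio.h>

int main(void)
{
    /* "AAAAAAAA" as LZSS emits it: one literal, then a match that
       points one byte back and copies 7 bytes, reading each byte
       just after it was written (the overlapping-copy trick). */
    char dst[9] = { 0 };
    dst[0] = 'A';                 /* literal */
    for (int i = 1; i <= 7; i++)  /* match: disp = 1, length = 7 */
        dst[i] = dst[i - 1];
    printf("%s\n", dst);          /* prints AAAAAAAA */
    return 0;
}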

#171204 - brave_orakio - Fri Nov 06, 2009 3:58 am

Hey guys, what should the header look like in WORD form?

With 3072 bytes uncompressed, my header looks like this:

Code:
0x000C0010


The little endian thing is really messing with my head. Also messing me up is the MSB and LSB thing written in GBATEK, heh.
_________________
help me

#171206 - keldon - Fri Nov 06, 2009 9:24 am

Little endian = little end first. Our number system is big endian, since we write the big end first (100 as opposed to 001). 0x000c0010 (which is written in big endian form) when stored in little endian would have the following memory byte arrangement:
- [0] 10
- [1] 00
- [2] 0c
- [3] 00

The least significant bit is the bit with the lowest value. So for the number 0x01 (binary 00000001), the least significant bit is 1. Or for the number 0x80, (binary 10000000) the most significant bit is 1.
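
A quick way to see that layout from C (runnable on any little-endian machine; 0x000C0010 is the LZ77 header from above, i.e. (3072 << 8) | 0x10):

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t header = 0x000C0010;          /* 3072-byte LZ77 header */
    const uint8_t *p = (const uint8_t *)&header;
    /* On a little-endian machine (like the GBA) this prints: 10 00 0c 00 */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    return 0;
}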

#171360 - brave_orakio - Mon Nov 16, 2009 5:14 am

Heh, finally got LZSS working.

Wow - I was able to compress from an original size of u32[768] down to u32[314]. Almost a 60% compression rate!

For VRAM-safe decompression, I can only assume that compressed and uncompressed data needs to come in multiples of 2 bytes, since only 16-bit writes are allowed to VRAM?
_________________
help me