#57471 - phirewind - Sun Oct 16, 2005 7:27 am
This is sort of a big question, so please forgive me for the long post...
I'm fleshing out a project that is essentially an e-book delivery system on the GBA (and eventually DS so I can use stylus input), and am trying to decide on a few key issues. I have several different options for each, and was hoping for some feedback from those of you with more expertise in the performance characteristics of the hardware.
1. I'd want to deliver mass amounts of text data, such as a combination of reference manuals, and I could easily see the upper limit of a 256 Mbit cart becoming a real constraint on the system in some extreme cases. My first real target test document is 40 Mbit in raw text form. What would the best-case scenario be for storage? A conversion to "const unsigned char" files like GFX2GBA outputs? That seems like the best choice to keep it out of RAM (since it won't have to be streamed in large chunks for a simple display utility), but then the question comes up of how to manage that information. What is the limit on how many items can be in a const unsigned char array? If it's a 32-bit address, then each document could be stored as one huge hunk, since it can't get near 4 gigabytes, and another const array generated as an index for the multiple levels of referencing (chapter, section, subject, whatever). Then a document could be stored and indexed something like this...
Code:
#define DEPTH 3  // number of nesting levels in the structure
const u32 elemcount[DEPTH] = { 25, 89, 520 };  // 25 chapters, 89 sections, 520 subjects

// store 6 u32's for each element:
// 1. index of the first child element
// 2. number of children
// 3. byte position in raw data
// 4. length of element in bytes
// 5. byte position in the name array
// 6. length of element name
const u32 eleminfo[] = {
    1, 25, 0, 4678301, 0, 17,   // the total book: first chapter is element 1, 25 chapters,
                                // starts at byte 0, 4678301 total bytes, name at 0, 17-byte name
    27, 4, 0, 184235, 18, 12,   // chapter 1: first section is eleminfo 27, 4 sections in chapter,
                                // starts at byte 0, lasts 184235 bytes, name at 18, 12-byte name
    // etc.
};
const unsigned char elemnames[] = {
    'T','h','e',' ','B','o','o','k',' ','o','f',' ','S','t','u','f','f',
    'I','n','t','r','o','d','u','c','t','i','o','n',
    // etc.
};
const unsigned char mybook[4678301] = {  // the raw data
    'M','a','r','y',' ','h','a','d',' ','a',' ','l','i','t','t','l','e',' ','c','o','d','e','r',
    // etc., for 4.6 million characters
};
// with the 'X', syntax each data byte takes about 4 characters of source,
// so 5 million characters of data need roughly 20 million characters to store...
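Then walking the index is just pointer math on that table. Something like this is what I'm picturing (find_child and elem_text are just names I'm sketching with, using the same u32 typedef and the arrays above):

Code:
// sketch: helpers for the 6-u32-per-element table above
// field offsets within one element record
enum { FIRST_CHILD, NUM_CHILDREN, DATA_POS, DATA_LEN, NAME_POS, NAME_LEN };

// index of the nth child of 'parent', or -1 if out of range
static int find_child(u32 parent, u32 n)
{
    const u32 *e = &eleminfo[parent * 6];
    if (n >= e[NUM_CHILDREN])
        return -1;
    return (int)(e[FIRST_CHILD] + n);
}

// pointer to an element's raw text; length comes back through *len
static const unsigned char *elem_text(u32 elem, u32 *len)
{
    const u32 *e = &eleminfo[elem * 6];
    *len = e[DATA_LEN];
    return &mybook[e[DATA_POS]];
}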
Simply put, if you had to deliver 5 million characters for a document organized in 3 to 5 levels, how would you store and access it? Am I proposing something even feasible? Ok, maybe not so simple...
2. Displaying (an easier question). Since this is strictly a text-display system, I'll be pre-rendering a non-mono-spaced font at 12, 18, and 24pt sizes, possibly using a real-time sub-pixel method to make the fonts come out nice and smooth on the GBA or DS LCD. My best options seem to be:
a) blit the character images to a Mode 3 or 4 background, use no sprites.
b) use a sprite for each character, and count scanlines or hblanks to write a new set of sprites into OAM for each line of text.
From my Photoshop mockups, the densest case would be the 12pt font, using 12 scanlines per line of text, with a reasonable max of about 40 characters per line. It seems to me that if I use the sprites + hblank method, I'd have to re-render the sprites for every line of text, since I'd be using up to 11 lines of text on the screen. If I used a real-time sub-pixel font smoother, would I be able to render 40 characters in under 15,000 cycles? I think I just talked myself into a manual blitting method in Mode 4... that way, as the text scrolls, even with sub-pixel smoothing I'd only have to re-render a couple of scanlines per vblank and just copy the rest between buffers with the proper offset. Then I'd only have to re-render the entire screen with the font smoothing if the text was scrolled by page or tabbed to an entirely different location.
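To make that concrete, here's roughly the scroll-and-patch idea I'm picturing for Mode 4 (just a sketch; render_text_row stands in for whatever the sub-pixel renderer turns out to be, and the row copy could become a DMA transfer):

Code:
typedef unsigned short u16;

#define SCREEN_H      160
#define ROW_HALFWORDS 120   // 240 8-bit pixels per row = 120 u16 writes (VRAM wants 16-bit access)

extern void render_text_row(volatile u16 *page, int y);  // placeholder for the glyph renderer

// scroll the page up by 'lines' scanlines: copy the surviving rows,
// then re-render only the newly exposed rows at the bottom
void scroll_up(volatile u16 *dst, volatile u16 *src, int lines)
{
    int x, y;
    for (y = 0; y < SCREEN_H - lines; y++)
        for (x = 0; x < ROW_HALFWORDS; x++)
            dst[y * ROW_HALFWORDS + x] = src[(y + lines) * ROW_HALFWORDS + x];
    for (y = SCREEN_H - lines; y < SCREEN_H; y++)
        render_text_row(dst, y);
}

Copying between the two Mode 4 pages at 0x06000000 and 0x0600A000 and flipping DISPCNT bit 4 at vblank should avoid tearing, and DMA3 would move the rows a lot faster than that CPU loop.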
3. Searching the text. I don't suppose anyone here has ever done this on a GBA... text comparisons? I guess I could always just brute-force it, but I was hoping for a way to let the user search for a word or exact phrase through part or all of the document. I shudder to think how long it would take to search through a 40 Mbit data structure, but maybe I'm underestimating the speed of the media. Any ideas on how to speed up any part of that process?
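Worst case, brute force would look something like this (a naive scan with a first-byte pre-check; something like Boyer-Moore-Horspool could skip ahead by whole needle lengths if this turns out too slow):

Code:
typedef unsigned int u32;

// naive search: byte offset of the first occurrence of
// needle[0..nlen) in doc[0..doclen), or -1 if not found
long find_text(const unsigned char *doc, u32 doclen,
               const unsigned char *needle, u32 nlen)
{
    u32 i, j;
    if (nlen == 0 || nlen > doclen)
        return -1;
    for (i = 0; i + nlen <= doclen; i++) {
        if (doc[i] != needle[0])     // cheap pre-check skips most positions
            continue;
        for (j = 1; j < nlen && doc[i + j] == needle[j]; j++)
            ;
        if (j == nlen)
            return (long)i;
    }
    return -1;
}

Back-of-envelope, even with ROM waitstates that's maybe a few seconds over 5 million bytes, less if the inner loop runs from IWRAM.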
Well, I may have answered my own display question, but any insight into that, the storage method, or the search routine would be greatly appreciated. Once I get the rendering method and data structure in place, this is a fairly simple application. Heh, it may take more time to write the tools to convert some of the documents into the proper format than it will to write the GBA application to view them. Ah well, I can at least have fun with some interesting code for a little while.
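In case it helps anyone picture the tool side, the converter really could be as dumb as this (host-side C, hex bytes instead of the comma'd characters above; txt2c is just a name I made up):

Code:
#include <stdio.h>

// host-side tool: txt2c input.txt output.c
// emits the document as a const unsigned char array, 16 bytes per line
int main(int argc, char **argv)
{
    FILE *in, *out;
    int c, n = 0;

    if (argc != 3) {
        fprintf(stderr, "usage: txt2c input.txt output.c\n");
        return 1;
    }
    in  = fopen(argv[1], "rb");
    out = fopen(argv[2], "w");
    if (!in || !out) {
        fprintf(stderr, "txt2c: can't open files\n");
        return 1;
    }
    fprintf(out, "const unsigned char mybook[] = {\n");
    while ((c = fgetc(in)) != EOF) {
        fprintf(out, "0x%02x,", c);
        if (++n % 16 == 0)
            fputc('\n', out);
    }
    fprintf(out, "\n};\nconst unsigned int mybook_len = %d;\n", n);
    fclose(in);
    fclose(out);
    return 0;
}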