gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies. A new forum can be found here.

DS development > Using capture to create double polygon potential at 30FPS?

#145814 - DiscoStew - Thu Nov 22, 2007 11:16 pm

This is just a thought that came to me the other day, but couldn't the capture method used to allow 3D on both screens also be a way to double the polygon count on a single screen? Sort everything to be rendered so that the back portion gets captured first, then render the remaining front portion on the 3D layer, with the captured layer displayed behind it.

Now, I know there are potential problems associated with this, such as anything rendered passing through the boundary between the front and back portions, but in circumstances where that isn't a problem, couldn't it work?
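
To be concrete, here is a rough sketch of the capture step I have in mind, assuming libnds and the DISPCAPCNT bit layout from GBATEK; the bank choice and frame scheduling are my own guesses, not a tested implementation:

Code:

/* Capture the 3D output (the back portion of the scene) into VRAM
   bank D, using raw DISPCAPCNT bits as documented in GBATEK. */
#include <nds.h>

static void capture_back_half(void)
{
    /* Bank D must be in LCDC mode so the capture unit can write it. */
    vramSetBankD(VRAM_D_LCD);

    REG_DISPCAPCNT = (1u << 31)    /* enable (one-shot, self-clearing) */
                   | (0u << 29)    /* mode 0: capture source A only    */
                   | (1u << 24)    /* source A = 3D output only        */
                   | (3u << 20)    /* capture size = 256x192           */
                   | (3u << 16);   /* destination = VRAM bank D        */
}

/* Next frame: map bank D as a 16-bit bitmap background behind BG0
   (the 3D layer) and render only the front portion of the scene. */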
_________________
DS - It's all about DiscoStew

#145815 - NeX - Thu Nov 22, 2007 11:18 pm

Yes, and I can imagine it being used in fighting games such as Soul Calibur DS...
_________________
Strummer or Drummer?
Or maybe you would rather play with sand? Sandscape is for you in that case.

#145832 - jetboy - Fri Nov 23, 2007 1:35 pm

DiscoStew wrote:
Now, I know there are potential problems associated with this, such as anything rendered passing through the boundary between the front and back portions, but in circumstances where that isn't a problem, couldn't it work?


Not only would that work, but it's used in many commercial games. Take Nintendogs, for example. The background is rendered in one pass, while the dogs and toys are rendered in the other.
_________________
Colors! gallery -> http://colors.collectingsmiles.com
Any questions? Try http://colors.collectingsmiles.com/faq.php first, or official forums http://forum.brombra.net

#145833 - a128 - Fri Nov 23, 2007 2:58 pm

jetboy wrote:

Take Nintendogs, for example. The background is rendered in one pass, while the dogs and toys are rendered in the other.

Why do you think this game uses this technique?
Just because the dogs don't look like low-poly models?

#145834 - simonjhall - Fri Nov 23, 2007 3:06 pm

You can only do front-to-back compositing, right? You don't get any Z out, do you? It'd be nice to be able to render half a scene, capture everything (including depth), and then restart rendering.
_________________
Big thanks to everyone who donated for Quake2

#145835 - Mighty Max - Fri Nov 23, 2007 3:35 pm

Well, I can think of a completely z-aware (5-bit) method for rendering double the number of polygons and vertices in 4 passes (15fps):

- Split the scene into two randomly chosen partitions (A, B)
- render and capture A as is
- render and capture A with no lighting, fog enabled in white, and all colors set to black
- render and capture B as is
- render and capture B with no lighting, fog enabled in white, and all colors set to black
- merge the two full-color images, choosing the source of each pixel by the "whiteness" of the fog (= depth of the pixel)

While it is possible, it is very slow and memory-consuming (it needs to store 4x the screen in WRAM, plus at least one completed frame in VRAM to be displayed while the next is rendered in the background). So beyond a demo that renders more polygons, I don't see much use :D
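
The merge step could look roughly like this; a sketch only, where the buffer names, the RGB555 capture format, and reading "whiter fog" as "farther away" are all assumptions on my part:

Code:

#include <nds.h>

/* aFog/bFog are the black-geometry + white-fog captures, aCol/bCol
   the full-color captures; all 256x192 RGB555 buffers (assumed). */
static void merge(const u16 *aCol, const u16 *aFog,
                  const u16 *bCol, const u16 *bFog, u16 *out)
{
    for (int i = 0; i < 256 * 192; i++) {
        /* With black polygons under white fog, the red channel alone
           serves as a depth estimate: more fog = farther away. */
        int depthA = aFog[i] & 0x1F;
        int depthB = bFog[i] & 0x1F;
        out[i] = (depthA <= depthB) ? aCol[i] : bCol[i];
    }
}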
_________________
GBAMP Multiboot

#145836 - DiscoStew - Fri Nov 23, 2007 3:48 pm

Mighty Max wrote:
Well, I can think of a completely z-aware (5-bit) method for rendering double the number of polygons and vertices in 4 passes (15fps):

- Split the scene into two randomly chosen partitions (A, B)
- render and capture A as is
- render and capture A with no lighting, fog enabled in white, and all colors set to black
- render and capture B as is
- render and capture B with no lighting, fog enabled in white, and all colors set to black
- merge the two full-color images, choosing the source of each pixel by the "whiteness" of the fog (= depth of the pixel)

While it is possible, it is very slow and memory-consuming (it needs to store 4x the screen in WRAM, plus at least one completed frame in VRAM to be displayed while the next is rendered in the background). So beyond a demo that renders more polygons, I don't see much use :D


I thought I read in GBATEK about the rear plane using 2 bitmaps: one holding the image data, and one holding the depth data. Then again, if I read correctly, using that specific method also locks the VRAM out of being used for textures. I could be wrong though. Maybe it only locks out banks 2 and 3.
_________________
DS - It's all about DiscoStew

#145837 - Mighty Max - Fri Nov 23, 2007 3:55 pm

Oh, OK, this info was new to me. So you can drop the manual z-compare and source selection, and only 3 passes are needed:

- rendering A in full color (A.colors)
- rendering A so as to obtain a depth map (A.depth)
- rendering B with A.colors and A.depth set as the rear plane

The loss of texture memory can be partly recovered by splitting the scene by textures, so that each scene partition does not need the full set of textures.
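
As an outline, the three passes could be wired up like this; the render/capture helpers are hypothetical placeholders rather than a real API, and only the rear-plane bit follows GBATEK's DISP3DCNT description:

Code:

#include <nds.h>

/* Hypothetical helpers standing in for the application's renderer: */
extern void render_partition_color(int which);   /* full-color pass   */
extern void render_partition_depth(int which);   /* fog-encoded depth */
extern void capture_to(u16 *dst);
extern void copy_to_texture_slots(const u16 *colors, const u16 *depth);

void render_frame(u16 *A_colors, u16 *A_depth)
{
    render_partition_color(0);  capture_to(A_colors);
    render_partition_depth(0);  capture_to(A_depth);
    /* Per GBATEK, the clear color image and clear depth image live in
       texture slots 2 and 3, so those are lost to textures here. */
    copy_to_texture_slots(A_colors, A_depth);
    GFX_CONTROL |= (1 << 14);   /* DISP3DCNT bit 14: bitmap rear plane */
    render_partition_color(1);  /* B z-tests against A_depth in hw     */
}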
_________________
GBAMP Multiboot

#145860 - tepples - Sat Nov 24, 2007 12:38 am

Mighty Max wrote:
The loss of texture memory can be partly recovered by splitting the scene by textures, so that each scene partition does not need the full set of textures.

But how much texture data can you copy to VRAM during a single vblank? If I recall correctly, it's like the NES or the 8-bit Game Boy, where you get a narrow time window, not like GBA or DS 2D, where the window is long enough for it not to matter.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.

#145862 - Mighty Max - Sat Nov 24, 2007 1:03 am

Well, there is a pass which only uses black triangles to calculate z-values, allowing you to access the texture memory.

Alternating the A and B passes reduces the number of texture exchanges:

Render A -> Render Z-Buffer & exchange textures -> Render B (complete Frame 1)
Render B -> Render Z-Buffer & exchange textures -> Render A (complete Frame 2)
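
In loop form, with purely hypothetical helper names, just to show where the texture upload hides:

Code:

#include <nds.h>

extern void render_textured(int part);   /* hypothetical helpers */
extern void render_depth_only(void);
extern void upload_textures(int part);

void frame_loop(void)
{
    int cur = 0, other = 1;
    for (;;) {
        render_textured(cur);    /* cur's textures already resident  */
        render_depth_only();     /* black polys: no texture reads    */
        upload_textures(other);  /* safe to rewrite texture VRAM now */
        render_textured(other);  /* completes the frame              */
        int t = cur; cur = other; other = t;  /* swap roles per frame */
    }
}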
_________________
GBAMP Multiboot

#145889 - M3d10n - Sat Nov 24, 2007 11:05 pm

Just change the near and far clip planes for each pass and turn on clipping. You won't need to care about depth that way, the trade-off being that you'll have a few extra polygons wherever geometry crosses the boundary.
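
Something along these lines; gluPerspective is the libnds call, while NEAR/MID/FAR and draw_scene() are made-up placeholders:

Code:

#include <nds.h>

/* Placeholder plane values and scene callback: */
#define NEAR 0.1f
#define MID  16.0f
#define FAR  256.0f
extern void draw_scene(void);

void render_back_slice(void)    /* pass 1: capture this one */
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60, 256.0f / 192.0f, MID, FAR);
    draw_scene();               /* hardware clips to [MID, FAR]  */
}

void render_front_slice(void)   /* pass 2: over the capture */
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60, 256.0f / 192.0f, NEAR, MID);
    draw_scene();               /* hardware clips to [NEAR, MID] */
}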

Also, while you can upload your own z-buffer from VRAM, there seems to be no way to read the Z-buffer, because there isn't one.

Well, actually there is one, but its dimensions are 256 x 1 and it's re-used for each scanline.

#145891 - Mighty Max - Sat Nov 24, 2007 11:32 pm

M3d10n wrote:
Just change the near and far clip planes for each pass and turn on clipping. You won't need to care about depth that way, the trade-off being that you'll have a few extra polygons wherever geometry crosses the boundary.


It's indeed another great method; however, it is very scene-dependent, as you need some way to determine a good cutting plane. Otherwise you might end up with hundreds of split polygons, or possibly a cut that separates only a few polygons from the whole.

Quote:

Also, while you can upload your own z-buffer from VRAM, there seems to be no way to read the Z-buffer, because there isn't one.


While that is true for the buffer itself, it isn't true for the z-values. That is why I suggested the fog method: "Density is linear interpolated for pixels that are between two Density depth boundaries." (GBATEK)
_________________
GBAMP Multiboot

#145942 - nce - Mon Nov 26, 2007 1:01 am

Mighty Max wrote:
Fog-"Density is linear interpolated for pixels that are between two Density depth boundaries." (gbatek)


The only problem is that the Z you can define is not a linear Z.

When I did some tests rendering a scene in 3ds Max and using it, I had to write my z-buffer as:

Code:

local color = pix.zdepth  -- the z depth of my pixel (linear)
local far = 40960.0
local near = 4096.0
-- convert linear eye-space z to the hardware's non-linear depth
color = ((far + near) / (far - near)) + ((1.0 / color) * ((-2.0 * far * near) / (far - near)))
color = (color + 1.0) / 2.0  -- map -1..1 to 0..1

-- because the depth map is saved as 15 bit and not 24, you have to convert it
color *= 16777215.0  -- scale to 24 bit
-- bring it back to 15 bit
color *= 32768.0
color -= 511.0
color /= (32768.0 * 512.0) + 511.0

-- write data
WriteShort file ((color) as Integer)


Of course, your far and near values have to reflect the ones you used when setting up your projection matrix.

And if the conversion from 24 to 15 bits looks strange, it's because of the GBATEK doc (look at the rear plane data):

The 15bit Depth is expanded to 24bit as "X=(X*200h)+((X+1)/8000h)*1FFh".
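
In C, that expansion and an approximate inverse would look like this; this is my own reading of the formula, with the inverse mirroring the script above rather than coming from any documentation:

Code:

#include <stdint.h>

/* GBATEK: "X=(X*200h)+((X+1)/8000h)*1FFh" */
static uint32_t depth15_to_24(uint32_t x)
{
    return (x * 0x200u) + ((x + 1u) / 0x8000u) * 0x1FFu;
}

/* Approximate inverse; maps 0xFFFFFF back to 0x7FFF exactly. */
static uint32_t depth24_to_15(uint32_t x)
{
    return (uint32_t)(((int64_t)x * 0x8000 - 0x1FF)
                      / (0x8000LL * 0x200 + 0x1FF));
}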
_________________
-jerome-

#145953 - a128 - Mon Nov 26, 2007 9:14 am

Could someone post a demo of the technique?

#145955 - Mighty Max - Mon Nov 26, 2007 9:35 am

nce wrote:
Mighty Max wrote:
Fog-"Density is linear interpolated for pixels that are between two Density depth boundaries." (gbatek)


The only problem is that the Z you can define is not a linear Z.


This isn't any problem at all.
I am not recreating a z value by linear means; I am mapping z by linear means.

And if the DS calculates the fog as Density = a + b*z, with a being the offset and b the scaling, then z can be recovered as z = (Density - a) / b. As you can see, the function that created z isn't needed here. So you could choose a logarithmic scale, a look-up table, or even a randomly assigned z as the base z-function.

If you choose the fog setup cleverly, so that a becomes 0 and b becomes 1, all that is left is Density = 0 + 1*z = z. Done.
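
For instance, with the libnds-style fog calls; I'm assuming glFogOffset/glFogShift/glFogDensity behave as their names suggest, and the shift value would certainly need tuning per scene:

Code:

#include <nds.h>

void setup_linear_fog(void)
{
    /* A linear ramp in the 32-entry fog table, so density is (as near
       as the hardware allows) proportional to depth: a = 0, b = 1. */
    glEnable(GL_FOG);
    glFogColor(31, 31, 31, 31);  /* white fog                       */
    glFogOffset(0);              /* ramp starts at depth 0          */
    glFogShift(0);               /* widest entry spacing (a guess)  */
    for (int i = 0; i < 32; i++)
        glFogDensity(i, (i * 127) / 31);  /* 0..127 linear in depth */
}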

a128 wrote:
Could someone post a demo of the technique?


If no one has done it by Wednesday next week, I'll do it. For now, I have some university work left that I've been desperately trying to hide from, but can't any longer.
_________________
GBAMP Multiboot

#146140 - RickA - Thu Nov 29, 2007 4:08 pm

What we did when we were playing around with it was split the scene into left/right parts rather than front/back. Then you don't need to keep the depth buffer around, as you never need it again. Naturally this will cost you some triangles, and possibly interpolation artifacts on the boundary where the polygons are clipped.

#146195 - S7ARBVCK - Fri Nov 30, 2007 10:45 am

Sorry, am I missing something here? If your scene is rendered back to front, you can just render 2000 polys, grab the scene, blit it as a background, and render the forward half of the scene over the top of it, and it'll look fine, surely?

#146205 - tepples - Fri Nov 30, 2007 1:47 pm

The front/back technique can be yours if the price is right, but it's not always easy. The trouble comes when you have a camera below the ceiling, which tends to rotate more than a camera that's positioned high above the scene. After numerous camera rotations, it is often difficult to find the best boundary to split the scene so that the back half comes closest to 2000 polygons without going over and without drawing anything that should be in the front half.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.

#146207 - Mighty Max - Fri Nov 30, 2007 2:29 pm

Front-to-back rendering works perfectly for static scenes (walking through the BSP).

If you include dynamic elements, you'd need dynamic resorting and splitting of the polygons, which is very heavy on calculations. If you can't afford that (e.g. because there is no FPU), you have to maintain the z-information for already-drawn pixels.
_________________
GBAMP Multiboot