gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback machine copies. A new forum can be found here.

DS development > Mixing 2D and 3D via Painter's Algorithm?

#167756 - DiscoStew - Thu Mar 26, 2009 5:32 am

Is it possible on the DS? For those who don't know what I mean: you basically draw the 2D or 3D elements one after another, and whatever is drawn last covers what came before. It's kinda like layers, but specifically dealing with both 2D and 3D at the same time.
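
(If it helps picture it, here's the idea in plain C with nothing DS-specific about it; the Element struct, its depth field and the draw callback are purely for illustration, not any real API:)

Code:
#include <stdlib.h>

/* One drawable element, 2D or 3D alike; all that matters is its draw order. */
typedef struct {
    int depth;               /* larger = farther from the viewer          */
    void (*draw)(void);      /* whatever actually puts pixels on screen   */
} Element;

static int farther_first(const void *a, const void *b)
{
    return ((const Element *)b)->depth - ((const Element *)a)->depth;
}

/* Painter's Algorithm: sort back to front, then draw in that order.
 * Later draws simply overwrite earlier ones, so no depth test is needed. */
void paint(Element *list, size_t count)
{
    qsort(list, count, sizeof(Element), farther_first);
    for (size_t i = 0; i < count; i++)
        list[i].draw();
}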

I know the DS has the hardware to do both, but they're separate: you have the 2D BGs/OBJs, and the 3D output, which only goes to BG0. In some circumstances that's fine, when the 2D elements sit only in front of or behind all the 3D elements, but not when you want a 2D element placed in between two 3D elements. This is the kind of effect I was thinking of for a "dream project" (that I'd like to start one day if I ever get the chance and the time).

For a better description of what I'm talking about, think of the PSX Final Fantasy games, where the areas are pre-rendered, but work quite well with the inclusion of actual 3D elements.

The only way I can think of to get this particular effect is to make every 2D element that can sit between 3D elements part of the 3D scene itself, as quads. But not only can I not figure out how to make those quads act like the hardware BGs/OBJs within a 3D scene, they'd also be affected by z-buffering, and what I'm looking for is what I said: a Painter's Algorithm, where the order they're sent in determines their visibility against the rest.

Would anyone happen to know how this could be done?
_________________
DS - It's all about DiscoStew

#167762 - TwentySeven - Thu Mar 26, 2009 11:45 am

(Disclaimer, I'd have to go read up again to be 100% sure)

Apparently you can set your own zbuffer up as a texture on the DS.

This is then used by the hardware to zbuffer against.

So you could use a prerendered scene and zbuffer, and then have the live geometry ztest correctly against it.

#167783 - Echo49 - Fri Mar 27, 2009 10:46 am

How do you go about setting up your own zbuffer?

#167792 - DiscoStew - Fri Mar 27, 2009 6:05 pm

lol, I completely forgot about the Rear Plane functionality. That might work, so I'll keep it on my list of options if I ever get going on the project.

Echo49,

http://nocash.emubase.de/gbatek.htm#ds3drearplane

You've got 2 bitmaps, one for the color data and one for the depth. When enabled, that depth data gets tested against the tris/quads you send in (though nothing happens until you flush). You create your own depth bitmap by filling the right texture slot (per GBATEK, slot 2 holds the clear color image and slot 3 the clear depth image) while the rear plane is enabled, whether with pre-set data or data taken from another source.
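
Something like this for the case where the color/depth bitmaps are pre-made (untested; I'm going off the GBATEK description and the usual libnds names like vramSetBankC/D, VRAM_C_TEXTURE_SLOT2, GFX_CONTROL and dmaCopy, and the colorBmp/depthBmp arguments are just placeholders for your pre-rendered data):

Code:
#include <nds.h>

/* Untested sketch: load a pre-rendered 256x192 color image and matching
 * depth image into texture slots 2 and 3, then enable the bitmap rear
 * plane (DISP3DCNT bit 14, per the GBATEK link above). */
void setupRearPlane(const u16 *colorBmp, const u16 *depthBmp)
{
    /* VRAM mapped as a texture slot isn't CPU-writable, so map the banks
       as LCD first and copy the bitmaps in. */
    vramSetBankC(VRAM_C_LCD);
    vramSetBankD(VRAM_D_LCD);

    /* 256x192 pixels, 16 bits each = 96KB per bitmap.
       Color: ABGR1555.  Depth: bits 0-14 = depth value, bit 15 = fog enable. */
    dmaCopy(colorBmp, VRAM_C, 256 * 192 * 2);
    dmaCopy(depthBmp, VRAM_D, 256 * 192 * 2);

    /* Bank C -> texture slot 2 (clear color image),
       bank D -> texture slot 3 (clear depth image). */
    vramSetBankC(VRAM_C_TEXTURE_SLOT2);
    vramSetBankD(VRAM_D_TEXTURE_SLOT3);

    /* DISP3DCNT bit 14: the rear plane comes from the bitmaps instead of
       the plain clear color, so anything flushed after this is tested
       against the pre-set depth image. */
    GFX_CONTROL |= BIT(14);
}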

This particular functionality had been discussed before as a way of pushing a rendered scene's polygon count past the 2k-tri/1.5k-quad limit. In theory it would work (I haven't tried it out myself), but it comes with a price. It relies on the capture unit, which can only capture one thing at a time, so to get this effect you'd have to capture a frame twice: once with untextured white polygons and fog set to black to create the depth buffer, then again with the actual scene rendered like you normally would. Once both have been captured, the next frame can display those bitmaps via the Rear Plane, and whatever you draw next will be tested against them.
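
Something roughly like this per displayed image (untested; the drawBackground*/drawLiveGeometry helpers are placeholders for whatever actually draws your scene, and I'm going off the GBATEK register descriptions plus the usual libnds names like REG_DISPCAPCNT, vramSetBank*, glFlush and swiWaitForVBlank):

Code:
#include <nds.h>

/* Hypothetical scene-drawing helpers, defined elsewhere. */
extern void drawBackgroundDepthPass(void);   /* white polys, black fog   */
extern void drawBackgroundColorPass(void);   /* the scene drawn normally */
extern void drawLiveGeometry(void);          /* the "extra" polygons     */

/* DISPCAPCNT layout per GBATEK: bit31 = enable, bits29-30 = source select
 * (0 = source A only), bit24 = source A is the 3D output, bits20-21 = size
 * (3 = 256x192), bits16-17 = destination VRAM bank (2 = C, 3 = D). */
static void capture3DTo(int bank)
{
    REG_DISPCAPCNT = BIT(31) | BIT(24) | (3 << 20) | ((bank & 3) << 16);
}

void renderExtendedFrame(void)
{
    /* Frame 1: background geometry as untextured white polys with black fog,
       so the captured "colors" encode depth.  Capture into bank D. */
    vramSetBankD(VRAM_D_LCD);
    drawBackgroundDepthPass();
    glFlush(0);
    capture3DTo(3);
    swiWaitForVBlank();

    /* Frame 2: the same geometry drawn normally; capture the colors into C. */
    vramSetBankC(VRAM_C_LCD);
    drawBackgroundColorPass();
    glFlush(0);
    capture3DTo(2);
    swiWaitForVBlank();

    /* Frame 3: hand both captures to the Rear Plane (texture slots 2 and 3)
       and draw the live geometry, which now gets tested against the
       captured depth. */
    vramSetBankC(VRAM_C_TEXTURE_SLOT2);
    vramSetBankD(VRAM_D_TEXTURE_SLOT3);
    GFX_CONTROL |= BIT(14);
    drawLiveGeometry();
    glFlush(0);
    swiWaitForVBlank();
}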

Because each displayed image now needs two capture frames plus the frame that draws over the Rear Plane, you don't go down to 30FPS (60/2) but to 20FPS (60/3). It's unfortunate that they didn't design it so it could capture both color and depth at the same time (unless I'm mistaken).
_________________
DS - It's all about DiscoStew

#167795 - Echo49 - Sat Mar 28, 2009 1:30 am

Can this be used to manipulate the z-index of each poly I draw without affecting its physical position along the Z axis when drawing 2D over 3D?

Sorry, I might have misunderstood the meaning of zbuffer.

#167796 - Dwedit - Sat Mar 28, 2009 2:03 am

So the ability to create bitmaps for color and depth means you can display unlimited polygons per frame as long as you take a hit to the frame rate. So you could then display 3 frames' worth of polygons at 12FPS?
_________________
"We are merely sprites that dance at the beck and call of our button pressing overlord."

#167797 - DiscoStew - Sat Mar 28, 2009 3:43 am

Echo49,

This approach, imo, is like taking a screenshot of what's been sent to the scene, both color and depth data, which is then used when rendering the next batch so it acts as if everything had been rendered together at once. While you could manipulate the depth values per pixel once they've been captured, you have to understand that the depth is relative to the "camera". Changing anything without taking the perspective of the "camera" into account would result in some really funky stuff.

Dwedit,

I can't give you a solid answer for that, but perhaps I should at least attempt to get a double-poly example worked up so it doesn't seem like I'm just BSing this all up. After that, I can see whether 3 frames' worth of polys is obtainable, but my thoughts about it make me assume that so much VRAM will be tied up that it can't be used for anything else, which means no textures. :(
_________________
DS - It's all about DiscoStew