gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies. A new forum can be found here.

Graphics > Planar Texture Mapping

#9109 - funkeejeffou - Mon Jul 28, 2003 4:52 pm

Hi all,

I've been coding 3D software engines for a year and a half now, and I've always used texture mapping algorithms that rely on per-vertex texture coordinates in texture space (like (u,v)=(0,0) and (u,v)=(64,64) for a 64*64 bitmap).
I've recently seen some planar texture mapping algorithms which define, for each 3D polygon, a texture (bmp), s and t vectors (texture unit vectors) and distS, distT (texture offsets).
Apparently, the u and v (texture coordinates in texture space) for a given vertex V would be defined like this (where . is the dot/scalar product):
u = (vectorS . V) + distS
v = (vectorT . V) + distT

I don't really understand how these can be texture coordinates, since u could be equal to 1000 for a 32*32 bitmap.
Must we divide the u,v result by the width and height to bring them back into texture space?
What's the point of using such a system?

Some 3D Guru help me out please...

#9582 - Derek - Sat Aug 09, 2003 1:30 pm

Are you talking about Quake-style texture mapping? In Quake, the texture surface description was independent of the polygon. The same "surface" description could be used across multiple polygons on the same BSP plane. A complex wall made from many polygons would therefore have its textures correctly aligned. Textured surfaces could be panned and resized without any polygon edges showing.

At the end of the day, a given XYZ is reverse projected to a UV and DDA texture mapped per 16 pixels.

You could also make a texture appear behind the polygon giving a neat sky effect. But, these days skyboxes are used.

Since DirectX and OpenGL use UVs per vertex, you will find Quake-style engines simply convert the ST data on the fly. Some cache the data at startup.
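
In practice that conversion is just a couple of dot products and adds per vertex. A minimal sketch of the idea in C (all names are made up here, assuming the usual S/T-axis-plus-offset convention; the wrap at the end is what lets u be 1000 on a 32*32 texture):

    /* Minimal sketch of planar (surface based) texture coordinates.
     * u and v are dot products of the vertex with the surface's S and T
     * axes plus an offset, measured in texels.  All names are made up. */
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* u/v are NOT limited to [0, width) x [0, height); the rasterizer
     * wraps them with a modulo (or an AND for power-of-two textures),
     * which is what makes the texture tile across the whole plane. */
    static void planar_uv(vec3 vertex, vec3 s_axis, vec3 t_axis,
                          float dist_s, float dist_t,
                          int tex_w, int tex_h,
                          int *texel_u, int *texel_v)
    {
        float u = dot3(s_axis, vertex) + dist_s;
        float v = dot3(t_axis, vertex) + dist_t;

        *texel_u = ((int)u % tex_w + tex_w) % tex_w;
        *texel_v = ((int)v % tex_h + tex_h) % tex_h;
    }

    int main(void)
    {
        vec3 v = { 1000.0f, 0.0f, 0.0f };
        vec3 s = { 1.0f, 0.0f, 0.0f };      /* S axis along world X */
        vec3 t = { 0.0f, 1.0f, 0.0f };      /* T axis along world Y */
        int tu, tv;

        planar_uv(v, s, t, 0.0f, 0.0f, 32, 32, &tu, &tv);
        printf("u,v wrap to texel (%d, %d)\n", tu, tv);   /* (8, 0) */
        return 0;
    }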

RE: GBA development. I use DDA texture mapping for the GBA, which is limiting but fast.

http://www.theteahouse.com.au/gba/

I can't imagine a Quake-style mapper would work on the GBA, although I have heard people claim they have achieved it. We will see.

#10069 - funkeejeffou - Tue Aug 26, 2003 1:52 pm

Sorry, I hadn't seen your reply.
Yes, I'm talking about Quake-style texture mapping; apparently, all texture coordinates are generated on the fly. I've managed to precalculate the texture coordinates "per vertex per face", as each coordinate needs 3 muls and 4 adds (too much to do on the fly), and it works. I still do not understand why Carmack did this. Is it to save space in RAM? Is it for mipmapping (as the u and v are mip-dependent)?

Since you mention skyboxes, you must have seen that the sky texture is doubled in width? Do you know why, and how to animate it correctly?
I think it is possible to port Quake 1 to the GBA, but it's a hard one, I must admit...

Also, for collision detection, if you're familiar with Quake's BSP file format: do you know if testing collision against the polygons in the leaf we're currently in is enough?
What are the faces contained in the nodes for? (They don't seem to be used in the world rendering, nor in the collision detection.)

A final question:
Does anyone know a way to cheat so that texture mapping looks perspective correct? I don't think the 3D engine can afford divisions per pixel (or even per 8 pixels).

#10073 - Derek - Tue Aug 26, 2003 3:16 pm

Quake used ST descriptions so the surface is independent of the polygon and its raster spans. The Quake engine achieves zero overdraw using span clipping. If you used UVs, then you would need to clip the UVs not only to the frustum but to each span, which would be very costly. By having the surface description independent of the polygon spans, you are able to clip the polygon down to the visible spans and reverse project each screen point back onto the texture surface without having to maintain any texture information during the rasterizing process.

After optimizing, you end up with 2 divides per 16 pixels. Some people quote 1 divide per 16 pixels, but that actually means 1 divide and 2 multiplies, which is about the same speed unless your name is Michael Abrash.
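
For what it's worth, here is a rough floating point sketch of that kind of span inner loop (made-up names, no wrapping or clamping; a real GBA version would have to be fixed point, which is exactly the problem discussed below):

    /* Perspective-correct span: u/z, v/z and 1/z vary linearly across the
     * screen, so step those, and only divide back to real u,v once every
     * 16 pixels, interpolating affinely in between. */
    static void texture_span(unsigned short *dest, int len,
                             float uoz, float voz, float ooz,   /* u/z, v/z, 1/z at x0 */
                             float uoz_step, float voz_step, float ooz_step,
                             const unsigned short *texture, int tex_pitch)
    {
        float z = 1.0f / ooz;                 /* one real divide...     */
        float u = uoz * z, v = voz * z;       /* ...plus two multiplies */

        while (len > 0) {
            int   run  = (len > 16) ? 16 : len;

            /* exact u,v at the far end of this 16 pixel run */
            float uoz2 = uoz + uoz_step * run;
            float voz2 = voz + voz_step * run;
            float ooz2 = ooz + ooz_step * run;
            float z2   = 1.0f / ooz2;
            float u2   = uoz2 * z2, v2 = voz2 * z2;

            /* affine inside the run; when run == 16 this "divide" is
             * really just a shift in a fixed point version */
            float du = (u2 - u) / run, dv = (v2 - v) / run;
            int   i;

            for (i = 0; i < run; i++) {
                *dest++ = texture[(int)v * tex_pitch + (int)u];
                u += du;
                v += dv;
            }

            u = u2;  v = v2;
            uoz = uoz2;  voz = voz2;  ooz = ooz2;
            len -= run;
        }
    }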

This was my Quake style texture mapper I wrote 5 years ago:
http://www.theteahouse.com.au/gba/bsp.html

Online source code is here:
http://www.theteahouse.com.au/gba/bsp/files/_________TheTeaHouse_gba_bsp_room101_Texture_cpp.html

The problem is, this style of texture mapper requires floating point code, which is a big no-no on the GBA. Maybe 16:16 is possible in ASM, but I can't code ARM.

So, what about perspective-correct DDA texture mapping using only integers? I got one working a year ago for a DOS portal engine.
http://www.theteahouse.com.au/gba/portal.html

Source code is here:
http://www.theteahouse.com.au/gba/portal/files/_________TheTeaHouse_gba_portal_source_engine_c.html

This code still uses a lot of 16:16 divides which can't be converted to reciprocal tables. It therefore requires some 16:16 fixed point code. In standard C you can only really use 24:8 to prevent 32-bit overflows.
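
To illustrate the overflow point (a sketch, not code from the engine; assuming you stay inside 32-bit ints with no long long or inline assembly): a fixed point multiply is (a * b) >> FRAC_BITS, and the intermediate product needs the integer bits of both operands plus twice the fraction bits. With 16:16 the fraction alone fills the whole 32-bit register; with 24:8 there are still 16 bits left for the integer parts.

    typedef long fixed8;                 /* 24:8 fixed point */

    #define FIX8_SHIFT   8
    #define FIX8_ONE     (1 << FIX8_SHIFT)

    #define INT_TO_FIX8(a)   ((fixed8)(a) << FIX8_SHIFT)
    #define FIX8_TO_INT(a)   ((a) >> FIX8_SHIFT)

    /* safe as long as the raw product a*b fits in 32 bits before the
     * shift, i.e. roughly |a*b| < 32768 in real-world units */
    static fixed8 fix8_mul(fixed8 a, fixed8 b)
    {
        return (a * b) >> FIX8_SHIFT;
    }

    /* the pre-shift needs headroom too: the real value of a must stay
     * below 32768 or the numerator overflows */
    static fixed8 fix8_div(fixed8 a, fixed8 b)
    {
        return (a << FIX8_SHIFT) / b;
    }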

But the PS1 had Quake II, and the PS1 didn't have perspective correction or z-buffers. So I'm guessing they used an engine like the one I've been developing for the GBA. (I could be wrong, so I'd love to speak to anyone with more information on the PS1 Quake II engine.)

RE: Quake. The best mini-quake engine I've seen is QMap by Sean Barrett. http://www.nothings.org/misc/bio.html

But the engine doesn't seem to be up anymore, so you can download it from my site:
http://www.theteahouse.com.au/bin/qmsrc.zip
http://www.theteahouse.com.au/bin/qmexe.zip

Personally, I think the best full 3D engine for the GBA would be a Tomb Raider (One) style engine, which worked well on old PCs and the PS1. Quake was really the first engine to require floating point hardware. Tomb Raider, the Build engine, DOOM etc. didn't.

RE: perspective correction tricks. Again, I think the PS1 is the place to get answers. Sub-dividing polygons when they are close is the best answer IMHO. It's not just the divides that are the problem. The low resolution of the GBA means a texture mapper has to get in, do the job and get out.
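
A sketch of the subdivision idea (illustrative only: draw_tri_affine() is an assumed, ordinary affine rasterizer that projects a camera-space triangle, and the threshold is made up). The key point is that splitting at the midpoints in camera space is exact, because u and v are linear over the surface in 3D, so each smaller piece carries less affine error:

    typedef struct { float x, y, z, u, v; } cvert_t;   /* camera space + texels */

    #define SUBDIV_NEAR_Z  64.0f        /* subdivide anything closer than this */

    extern void draw_tri_affine(cvert_t a, cvert_t b, cvert_t c);   /* assumed */

    static cvert_t midpoint(cvert_t a, cvert_t b)
    {
        cvert_t m;
        m.x = (a.x + b.x) * 0.5f;
        m.y = (a.y + b.y) * 0.5f;
        m.z = (a.z + b.z) * 0.5f;
        m.u = (a.u + b.u) * 0.5f;
        m.v = (a.v + b.v) * 0.5f;
        return m;
    }

    static void draw_tri_subdiv(cvert_t a, cvert_t b, cvert_t c, int depth)
    {
        float zmin = a.z < b.z ? (a.z < c.z ? a.z : c.z)
                               : (b.z < c.z ? b.z : c.z);

        if (depth == 0 || zmin > SUBDIV_NEAR_Z) {
            draw_tri_affine(a, b, c);       /* far away: affine error is small */
            return;
        }

        {   /* close to the camera: split into four and recurse */
            cvert_t ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
            draw_tri_subdiv(a,  ab, ca, depth - 1);
            draw_tri_subdiv(ab, b,  bc, depth - 1);
            draw_tri_subdiv(bc, c,  ca, depth - 1);
            draw_tri_subdiv(ab, bc, ca, depth - 1);
        }
    }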

But, yeah, maybe it's possible for some hardcore ARM coders. Another place to look for answers is MDK and Incoming. Both had great software rendering while maintaining high speed on old PCs.

My best advice to everyone is to pull out your old PC games. The GBA is following the same path.

Derek Evans

#10078 - Lupin - Tue Aug 26, 2003 3:55 pm

Derek, this is the best ARM tutorial around, I think:
http://k2pts.home.comcast.net/gbaguy/gbaasm.htm

Hm, I'm wondering why you worry about perspective-correct texture mapping and such; I think on such a small display you won't notice the shortcuts taken during development ^^

Using Q2 technology on the GBA is a good idea. I thought of using MD2 + Q3 BSP because I know both file formats very well... only my math kinda sucks though (I'm just in 12th grade atm)... :(

#10093 - funkeejeffou - Tue Aug 26, 2003 8:34 pm

Personally, I do not use a span buffer, firstly because it consumes too much memory for a system such as the GBA, and secondly because it is hard to implement a fully functional one in ASM. So I only clip my u and v coordinates to the frustum.
I also don't think a span buffer would really speed up the engine; the PVS already limits the overdraw, and RAM is precious on the GBA. But that's just my opinion, maybe I'm wrong.

For perspective texture mapping, I wonder if dividing only the start and end u/z, v/z values of each scanline by 1/z would be enough to limit distortion? There would then be only two divisions per scanline, and we would interpolate u and v linearly between xstart and xend (rough sketch below).
For the reciprocal table, I've thought of it, but how can you look up 1/z as a function of z? The values aren't a small contiguous range, so we can't use them directly as an array index. Any ideas?
I've seen your engine, and it really rocks. What kind of texture mapping did you use? (It seems to be perspective correct, even though there is sometimes distortion.)
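
To be concrete, the two-divides-per-scanline idea above would look something like this (rough floating point sketch, made-up names; the two /len setup divides can come from a small reciprocal-of-length table, since len is at most the screen width):

    /* u/z, v/z and 1/z at the two span ends; everything between xstart
     * and xend is plain affine interpolation. */
    static void span_two_divides(unsigned short *dest, int len,
                                 float uoz0, float voz0, float ooz0,
                                 float uoz1, float voz1, float ooz1,
                                 const unsigned short *texture, int tex_pitch)
    {
        float z0 = 1.0f / ooz0;               /* divide #1 */
        float z1 = 1.0f / ooz1;               /* divide #2 */
        float u  = uoz0 * z0,  v  = voz0 * z0;
        float du = (uoz1 * z1 - u) / len;     /* or: * recip_table[len] */
        float dv = (voz1 * z1 - v) / len;
        int   i;

        for (i = 0; i < len; i++) {           /* pure affine inner loop */
            *dest++ = texture[(int)v * tex_pitch + (int)u];
            u += du;
            v += dv;
        }
    }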

If you've already looked into Quake's BSP file format, maybe you can tell me if I'm right about collision detection:
You just check the camera location (or bounding box, or bounding sphere, or whatever) against the polys contained in the leaf we are currently in. If so, what is the use of the polys contained in the nodes (they're not needed for rendering)?

Lupin, I've already coded a 3D engine for the GBA, and believe me, you do see distortion on such a small screen with a simple DDA texture mapper, and it is ugly.

#10117 - Lupin - Wed Aug 27, 2003 3:09 pm

As far as I know, collision detection on Q3 (and perhaps Q2) maps is done recursively: it first checks the nodes and then the leaves, and it uses the node/leaf planes for checking collision and sliding along the surface. The actual polygon data is not involved in this process.
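
In rough C, that kind of plane-only walk looks something like this (structures made up; the real Quake formats pack nodes and leaves differently, and sliding needs a proper hull/box trace rather than a single point test):

    typedef struct { float nx, ny, nz, dist; } plane_t;

    typedef struct bspnode {
        int             is_leaf;
        int             contents;        /* only meaningful for leaves */
        plane_t         plane;           /* only meaningful for nodes  */
        struct bspnode *front, *back;
    } bspnode_t;

    #define CONTENTS_SOLID  (-2)         /* placeholder value */

    /* walk down the node planes until we land in a leaf; the leaf's
     * contents, not its polygons, say whether the point is in solid */
    static int point_contents(const bspnode_t *node, float x, float y, float z)
    {
        while (!node->is_leaf) {
            float d = node->plane.nx * x + node->plane.ny * y
                    + node->plane.nz * z - node->plane.dist;
            node = (d >= 0.0f) ? node->front : node->back;
        }
        return node->contents;           /* CONTENTS_SOLID => collision */
    }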