#95083 - mike260 - Thu Jul 27, 2006 7:25 pm
Hello all,
Here's a few random thoughts about improving texture-quality.
1) El-cheapo mipmapping:
- For each texture, get the average colour of all its texels
- When rendering with a texture, switch on fog and set the fog colour to this colour
- Fiddle with the fog settings to adjust the strength of the effect
The idea is to reduce the contrast on minified textures, in order to reduce the sparkles. However, it doesn't help with sparkles on edge-on polys. Also, if you have several textures that need to fit together seamlessly, they should all use the same fog colour.
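As a rough illustration, the precompute step might look something like this plain-C sketch (not actual DS code; the function name is invented, and it assumes the usual 15-bit texel layout with R in bits 0-4, G in 5-9, B in 10-14):
Code: |
#include <stdint.h>

/* Average the R, G and B channels of a 16-bit RGB555 texture.
   The result, packed back into 15-bit colour, is the fog colour
   to use when rendering with this texture. */
uint16_t average_colour_rgb555(const uint16_t *texels, int count)
{
    uint32_t r = 0, g = 0, b = 0;
    for (int i = 0; i < count; i++) {
        uint16_t t = texels[i];
        r += (t >>  0) & 0x1F;
        g += (t >>  5) & 0x1F;
        b += (t >> 10) & 0x1F;
    }
    return (uint16_t)((r / count) | ((g / count) << 5) | ((b / count) << 10));
}
|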
2) You can lessen the effects of unfiltered texture magnification just by increasing the texture's resolution. However, you still run into the problem that you can't get smooth gradients with 16bit colour, so you apply trick (3)...
3) Texture normalisation: Although each texel is only 16bit, the DS appears to have a 24bit colour pipeline (you don't see as much banding on gouraud-shading as you'd expect). So you can make better use of this precision like so:
- Start off with your 24bit source-texture
- Get the maximum R, G and B values used by the texture
- Scale each texel's colour by (255/max_R, 255/max_G, 255/max_B)
- Convert the texture to 16bit as normal
- When rendering with this texture, use a vertex colour to scale the texel-colours back down to the correct range.
This'll help most with darker textures; if the texture contains bright texels, it won't do much at all. You might be able to extend the method to deal with bright textures, if you can somehow get the DS to add a constant colour to the model when rendering.
BTW, can anyone verify that the DS has a 24bit internal colour pipeline?
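A minimal offline sketch of the normalisation step, assuming a 24-bit interleaved RGB source and a 15-bit destination; the function name, and returning the per-channel maxima for use as the vertex colour, are just for illustration:
Code: |
#include <stdint.h>

/* Rescale each channel so the brightest texel hits 255, then convert to
   15-bit. Render with a vertex colour of roughly (max[0], max[1], max[2])
   to modulate the texels back down to their original range. */
void normalise_texture(const uint8_t *src_rgb24, uint16_t *dst_rgb555,
                       int count, uint8_t max_out[3])
{
    uint8_t maxc[3] = { 1, 1, 1 };                 /* avoid divide-by-zero */
    for (int i = 0; i < count * 3; i++)
        if (src_rgb24[i] > maxc[i % 3]) maxc[i % 3] = src_rgb24[i];

    for (int i = 0; i < count; i++) {
        int r = src_rgb24[i*3 + 0] * 255 / maxc[0];
        int g = src_rgb24[i*3 + 1] * 255 / maxc[1];
        int b = src_rgb24[i*3 + 2] * 255 / maxc[2];
        dst_rgb555[i] = (uint16_t)((r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10));
    }
    max_out[0] = maxc[0]; max_out[1] = maxc[1]; max_out[2] = maxc[2];
}
|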
4) Normal-mapping:
- Get an objectspace normal-map texture for your model
- Palettise it, to get an 8bit texture and a 256-entry table of normals
- At rendering-time:
- Transform your light directions into objectspace
- Light each normal in your 256-entry normal-table using the objectspace lights
- Upload the resulting 256-entry table of colours as a palette
- Render the model
This one'll only work for rigid models, and if you want textures on your model you'll need to render another pass over it. Plus, the lighting will look oddly pixellated if your normal-map isn't high-res enough.
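For what it's worth, the per-frame palette-relighting step could look something like this sketch (floating point for clarity; a real DS version would use fixed point, and all names are invented):
Code: |
#include <stdint.h>

/* Light the 256-entry table of objectspace normals against one directional
   light and write out a 15-bit palette, ready to upload before rendering. */
void relight_palette(const float (*normals)[3], int count,
                     const float light_dir[3],      /* objectspace, unit length */
                     const uint8_t light_colour[3],
                     uint16_t *palette_out)
{
    for (int i = 0; i < count; i++) {
        float d = normals[i][0] * light_dir[0]
                + normals[i][1] * light_dir[1]
                + normals[i][2] * light_dir[2];
        if (d < 0.0f) d = 0.0f;                      /* clamp back-facing normals */
        int r = (int)(d * light_colour[0]) >> 3;
        int g = (int)(d * light_colour[1]) >> 3;
        int b = (int)(d * light_colour[2]) >> 3;
        palette_out[i] = (uint16_t)(r | (g << 5) | (b << 10));
    }
}
|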
_________________
"Ever tried? Ever failed? No matter. Try Again. Fail again. Fail better."
-- Samuel Beckett
#95088 - Lick - Thu Jul 27, 2006 8:02 pm
You should start a website or weblog containing tips for 3D DS programming. That part of the scene is really lacking right now! Help!
Nice tricks anyway, but I haven't been doing 3D on DS so I can't use the info.
_________________
http://licklick.wordpress.com
#95113 - silent_code - Thu Jul 27, 2006 10:42 pm
damn good ideas! i'll definitely try some of them as soon as possible! (waiting for hardware...)
more of them would be sooo nice ;) [pleeeeeeeeez!] :)
#95115 - tciny - Thu Jul 27, 2006 11:00 pm
I find the normal mapping idea rather interesting, but here are a few bottlenecks that I expect (I'd be glad to hear your thoughts on this):
Generally, you'll probably always want a diffuse+normal map method, so I guess it'd be best to store the normals in an array and then modify the actual texture luma per pixel before rendering.
The biggest problem I see is getting the normal for each texel. This requires you to iterate over each face of the mesh.
I had another idea, but unfortunately that one wastes an _awful_ lot of texture space (though it is rather CPU-friendly). For each rigid mesh, you store a diffuse map and 6 luma maps (i.e. 2 RGB textures with each channel being one luma map) for light from above, below, left, right etc. Now you, again, calculate the light position in object space and blend the textures to approximate the lighting.
So, before rendering, you modify the actual texture by adjusting its luminosity.
Memory requirements would be, say, 512 for the diffuse map and 2x256 for the luma maps.
Having roughly tested the look of things using this technique, I can say that the results aren't perfect but do add a lot of visual detail.
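A rough sketch of that blend, assuming an objectspace unit light direction and six 8-bit luma maps ordered +X, -X, +Y, -Y, +Z, -Z (all names and layouts invented for illustration):
Code: |
#include <stdint.h>

/* Weight the six directional luma maps by the light direction and modulate
   the diffuse texture's brightness with the blended result. */
void apply_directional_luma(const uint8_t *diffuse_rgb, uint8_t *out_rgb,
                            const uint8_t *luma[6], int count,
                            const float light_dir[3])
{
    float w[6];
    for (int a = 0; a < 3; a++) {                   /* positive part per axis */
        w[a*2 + 0] = light_dir[a] > 0.0f ?  light_dir[a] : 0.0f;
        w[a*2 + 1] = light_dir[a] < 0.0f ? -light_dir[a] : 0.0f;
    }

    for (int i = 0; i < count; i++) {
        float l = 0.0f;
        for (int m = 0; m < 6; m++)
            l += w[m] * luma[m][i];
        for (int c = 0; c < 3; c++) {
            float v = diffuse_rgb[i*3 + c] * l / 255.0f;
            out_rgb[i*3 + c] = (uint8_t)(v > 255.0f ? 255.0f : v);
        }
    }
}
|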
#95134 - sajiimori - Fri Jul 28, 2006 12:06 am
Or you could write a software renderer. :P
#95148 - DynamicStability - Fri Jul 28, 2006 12:56 am
You guys should fill in your profiles with your websites and what not.
_________________
Framebuffer is dead.
DynaStab.DrunkenCoders.com
#95161 - mike260 - Fri Jul 28, 2006 3:14 am
Oh yeah, one last one. It's a bit obscure and hard to explain, but can be useful. Fun with palettes:
First off, you have to write your own custom palettiser that can cope with any number of values per pixel (instead of just R, G and B).
Once you've done that, you can take a number of textures (all the same size), stack them together, and palettise them as if they were a single image. This gets you a single texture, plus one palette for each of your original textures.
This single texture magically contains all of your original textures - which palette you use with the texture determines which of your original textures appears when you render something.
You get the best results from combining similar textures - say, textures from the same tileset, or normal+damaged textures.
Why bother?
1) It saves space. If you combine similar textures, you'll get better quality for the same size (or a smaller size at the same quality) than if you stored them separately.
2) You can do nice effects:
Say you have an animating water texture with 4 frames of animation. Since the DS doesn't do multitexturing, you'd normally have to just flick from one frame to the next. But if you combine all the frames into a single texture, then blending palettes together will give the same effect as blending the original textures together. This means you can smoothly animate your texture instead of flipping between frames.
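A minimal sketch of the palette blend itself (names invented; uploading the result to palette VRAM each frame is left out):
Code: |
#include <stdint.h>

/* Linearly blend two 256-entry 15-bit palettes; t runs from 0 (all pal_a)
   to 256 (all pal_b). Uploading the result as the texture's palette fades
   smoothly between the original textures / animation frames. */
void blend_palettes(const uint16_t *pal_a, const uint16_t *pal_b,
                    uint16_t *pal_out, int count, int t)
{
    for (int i = 0; i < count; i++) {
        int ra = (pal_a[i] >>  0) & 0x1F, rb = (pal_b[i] >>  0) & 0x1F;
        int ga = (pal_a[i] >>  5) & 0x1F, gb = (pal_b[i] >>  5) & 0x1F;
        int ba = (pal_a[i] >> 10) & 0x1F, bb = (pal_b[i] >> 10) & 0x1F;
        pal_out[i] = (uint16_t)( ((ra * (256 - t) + rb * t) >> 8)
                               | (((ga * (256 - t) + gb * t) >> 8) << 5)
                               | (((ba * (256 - t) + bb * t) >> 8) << 10) );
    }
}
|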
_________________
"Ever tried? Ever failed? No matter. Try Again. Fail again. Fail better."
-- Samuel Beckett
#95168 - mike260 - Fri Jul 28, 2006 3:26 am
tciny wrote: |
I find the normal mapping idea rather interesting, but here are a few bottlenecks that I expect (I'd be glad to hear your thoughts on this):
Generally, you'll probably always want a diffuse+normal map method, so I guess it'd be best to store the normals in an array and then modify the actual texture luma per pixel before rendering.
The biggest problem I see is getting the normal for each texel. This requires you to iterate over each face of the mesh. |
The idea is that the normals are all in objectspace, not tangentspace. So once you get your light-directions into objectspace, you just need to light the 256 normals; because the texture's palettised, the GFX hardware will take care of applying the right colours to the right pixels.
_________________
"Ever tried? Ever failed? No matter. Try Again. Fail again. Fail better."
-- Samuel Beckett
#95169 - Inopia - Fri Jul 28, 2006 3:27 am
About the normal mapping: we used to do that all the time in the pc-demoscene. Basically since 1994 or so. It doesn't only work for rigid models, you can easily recalculate the vertex normals every frame. This is a linear-time algo so it doesn't explode calculation-wise when your models have more faces/vertices.
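For reference, the usual linear-time recompute is along these lines (a sketch, assuming an indexed triangle list; floats for clarity, names invented):
Code: |
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* One pass over the faces accumulating face normals into the vertices,
   one pass over the vertices normalising the result. */
void recompute_normals(const Vec3 *pos, int n_verts,
                       const unsigned short *idx, int n_tris, Vec3 *nrm)
{
    for (int v = 0; v < n_verts; v++) { nrm[v].x = nrm[v].y = nrm[v].z = 0.0f; }

    for (int t = 0; t < n_tris; t++) {
        Vec3 a = pos[idx[t*3+0]], b = pos[idx[t*3+1]], c = pos[idx[t*3+2]];
        Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z };
        Vec3 e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
        Vec3 fn = { e1.y*e2.z - e1.z*e2.y,          /* face normal (cross product) */
                    e1.z*e2.x - e1.x*e2.z,
                    e1.x*e2.y - e1.y*e2.x };
        for (int k = 0; k < 3; k++) {
            Vec3 *n = &nrm[idx[t*3+k]];
            n->x += fn.x; n->y += fn.y; n->z += fn.z;
        }
    }

    for (int v = 0; v < n_verts; v++) {
        float len = sqrtf(nrm[v].x*nrm[v].x + nrm[v].y*nrm[v].y + nrm[v].z*nrm[v].z);
        if (len > 0.0f) { nrm[v].x /= len; nrm[v].y /= len; nrm[v].z /= len; }
    }
}
|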
#95171 - mike260 - Fri Jul 28, 2006 3:30 am
Inopia wrote: |
About the normal mapping: we used to do that all the time in the pc-demoscene. Basically since 1994 or so. It doesn't only work for rigid models, you can easily recalculate the vertex normals every frame. This is a linear-time algo so it doesn't explode calculation-wise when your models have more faces/vertices. |
I meant per-texel normals, not per-vertex - the DS already does per-vertex lighting pretty well.
(edit) Or did you mean that you guys did tangent-space normal-mapping?
_________________
"Ever tried? Ever failed? No matter. Try Again. Fail again. Fail better."
-- Samuel Beckett
#95193 - Inopia - Fri Jul 28, 2006 4:50 am
Ah, beh, I read too fast. That per-texel normal mapping method looks pretty sweet. I think I've seen a similar method described somewhere a long, long time ago, but at the time I remember thinking that it would look kinda crappy since you'd only have 256 distinct normals. I have never actually tested it, so I'd be interested in some results. Do you have some images or something I could see?
What kind of palettising method do you use? I think using a 1D karnaugh map would suit this method of yours nicely.
I like this idea because a linear transformation guarantees that the 'distances' between vectors before the transformation remain the same after the transformation. This means that the quantisation you did in precalc keeps its quality.
rom please :)
#95207 - tepples - Fri Jul 28, 2006 5:53 am
Take a unit mesh such as a cube or icosahedron and subdivide it a few times. This will give you a unit polyhedron with many more sides, which approximates a sphere. Quantize normals to this unit polyhedron.
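One subdivision pass might look like this sketch (flat triangle list, duplicate vertices tolerated since only the set of directions matters; names invented):
Code: |
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 midpoint_on_sphere(Vec3 a, Vec3 b)
{
    Vec3 m = { (a.x+b.x)*0.5f, (a.y+b.y)*0.5f, (a.z+b.z)*0.5f };
    float len = sqrtf(m.x*m.x + m.y*m.y + m.z*m.z);
    m.x /= len; m.y /= len; m.z /= len;             /* push back onto the unit sphere */
    return m;
}

/* Split every triangle into four; 'in' holds n_tris*3 vertices, 'out' must
   hold n_tris*12. Run a few passes on an icosahedron, then collect the
   unique vertices as the normal codebook. */
void subdivide(const Vec3 *in, int n_tris, Vec3 *out)
{
    for (int t = 0; t < n_tris; t++) {
        Vec3 a = in[t*3+0], b = in[t*3+1], c = in[t*3+2];
        Vec3 ab = midpoint_on_sphere(a, b);
        Vec3 bc = midpoint_on_sphere(b, c);
        Vec3 ca = midpoint_on_sphere(c, a);
        Vec3 *o = &out[t*12];
        o[0]=a;  o[1]=ab; o[2]=ca;
        o[3]=ab; o[4]=b;  o[5]=bc;
        o[6]=ca; o[7]=bc; o[8]=c;
        o[9]=ab; o[10]=bc; o[11]=ca;
    }
}
|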
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.
#95231 - Inopia - Fri Jul 28, 2006 8:32 am
tepples: why? Wouldn't using any simple quantizing method yield a more precise approximation?
#95239 - mike260 - Fri Jul 28, 2006 10:47 am
Inopia wrote: |
What kind of palettising method do you use? I think using a 1D karnaugh map would suit this method of yours nicely. |
I currently use optimised median cut. Also I pre-transform images into HSV space so I can use a perceptual distance function (but not for normal-maps, obviously).
What's a karnaugh map, and how do I use it for palettising?
Sorry...I've implemented this on other platforms, but there's no DS version yet.
_________________
"Ever tried? Ever failed? No matter. Try Again. Fail again. Fail better."
-- Samuel Beckett
#95261 - Lick - Fri Jul 28, 2006 1:32 pm
People people, this topic is HOT. The only thing we need is screenshots of the above described methods. I volunteer to run them on hardware.
_________________
http://licklick.wordpress.com
#95294 - Inopia - Fri Jul 28, 2006 5:24 pm
A 1D karnaugh map is basically a very simple form of neural network. For a description and source, google for neuqant.c.
And a PC implementation would be interesting as well. Do you have one with bilinear filtering disabled, so I can see how it will look on a DS-like machine?
#95308 - tepples - Fri Jul 28, 2006 5:56 pm
Inopia wrote: |
tepples: why? Wouldn't using any simple quantizing method yield a more precise approximation? |
I was pointing out the easiest way to construct a uniform codebook over the surface of a sphere.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.
#95334 - Inopia - Fri Jul 28, 2006 8:57 pm
tepples: I don't want to start an argument, but I don't see how your method is easier if you already have a normal map.
#95390 - tepples - Sat Jul 29, 2006 4:16 am
If you have generated a normal map with "typical" tools, it's probably with deep pixels. This mapping method requires quantizing the normal map to a codebook (or "palette") of about 255 distinct unit vectors. You can generate the codebook using median cut, or you can generate it using a uniform codebook on the unit sphere. Especially if you have more than one object, a uniform codebook may work better.
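The quantisation step itself is just a nearest-vector search against whichever codebook you chose; a brute-force sketch (deep pixels assumed already decoded to unit vectors, names invented):
Code: |
/* For each texel normal, pick the codebook entry with the largest dot
   product and write its index into the 8-bit texture. */
void quantise_normal_map(const float (*normals)[3], int count,
                         const float (*codebook)[3], int codebook_size,
                         unsigned char *indices_out)
{
    for (int i = 0; i < count; i++) {
        int best = 0;
        float best_dot = -2.0f;
        for (int c = 0; c < codebook_size; c++) {
            float d = normals[i][0]*codebook[c][0]
                    + normals[i][1]*codebook[c][1]
                    + normals[i][2]*codebook[c][2];
            if (d > best_dot) { best_dot = d; best = c; }
        }
        indices_out[i] = (unsigned char)best;
    }
}
|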
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.
#95408 - Payk - Sat Jul 29, 2006 7:16 am
In my engine I've got a terrain. The grid is 13x13, very simple, but with the help of glColor it looks more rounded.
[screenshots omitted]
In both right-hand pics you can see that with the help of glColor you can make simple meshes look more complex, and that lighting can also make things look more detailed. The character there has exactly 150 polygons; the women and the other man have a bit more, but fewer than 160. Textures are all 8-bit, so there are many tweaks to improve the look.
Last edited by Payk on Sat Jul 29, 2006 4:04 pm; edited 1 time in total
#95416 - crossraleigh - Sat Jul 29, 2006 8:22 am
tepples wrote: |
If you have generated a normal map with "typical" tools, it's probably with deep pixels. This mapping method requires quantizing the normal map to a codebook (or "palette") of about 255 distinct unit vectors. You can generate the codebook using median cut, or you can generate it using a uniform codebook on the unit sphere. Especially if you have more than one object, a uniform codebook may work better. |
Having several objects doesn't make the normals in the normal maps uniformly distributed. If you have more than one regular image to quantize, you don't use web-safe colors just because you have to merge multiple palettes, do you?