#118753 - qw3rty - Fri Feb 16, 2007 12:57 pm
I have to calculate the normals of triangles.
The x and z positions of the vertices are fixed, only the y position changes.
Currently I calculate the cross product of two sides of each triangle.
Is there a faster way to do this? Because I use smooth shading, I have to calculate 6 normals per vertex (one per surrounding triangle) - that takes almost two thirds of the CPU time :O
(not optimized in any way other than FP-maths - didn't use IWRAM or anything)
#118755 - kusma - Fri Feb 16, 2007 2:05 pm
Is this a grid in the x/z plane with varying height?
If so, you can use the old-fashioned heightmap method. It's something like this:
Code: |
nx = node(x - 1, z) - node(x + 1, z)
ny = height_of_y_axis
nz = node(x, z - 1) - node(x, z + 1) |
and then normalize the vector.
edit: note that this won't work too well with high-frequency heightmaps (but neither does smooth shading in the first place ;)
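For illustration, a minimal C sketch of that method, assuming a height[z][x] array and a grid spacing STEP (both names made up); with differences taken over two grid steps, the constant y component comes out to 2 * STEP:
Code: |
/* minimal sketch of the central-difference method, not kusma's actual code.
   height[][], STEP and the 16x16 size are assumptions. */
#define STEP 1                     /* grid spacing in the x/z plane */
extern int height[16][16];         /* y value at each (x, z) grid node */
#define HEIGHT(x, z) (height[z][x])

/* normal at an interior grid node */
void grid_normal(int x, int z, int n[3])
{
    n[0] = HEIGHT(x - 1, z) - HEIGHT(x + 1, z);  /* ~ -dh/dx over 2 steps */
    n[1] = 2 * STEP;                             /* constant up component */
    n[2] = HEIGHT(x, z - 1) - HEIGHT(x, z + 1);  /* ~ -dh/dz over 2 steps */
    /* normalize n here if the renderer needs unit normals */
} |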
#118758 - memoni - Fri Feb 16, 2007 2:36 pm
Code: |
// zero the per-vertex accumulators
for (int i = 0; i < numVertices; i++)
    normal[i].Set(0, 0, 0);

// add each face's normal to its three corner vertices
for (int i = 0; i < numFaces; i++)
{
    Vec3 n = face[i].calcNormal();
    normal[face[i].a] += n;
    normal[face[i].b] += n;
    normal[face[i].c] += n;
}

// turn the sums back into unit normals
for (int i = 0; i < numVertices; i++)
    normal[i].normalize(); |
Depending on the size of your mesh and the format of the normals, you may not need to normalize the face normals. For example, you could keep "scratch" normals in 22:10 fixed-point format and pack them after normalization. If you don't normalize the face normals, larger faces will affect the direction of the smooth normals more, but that often looks better anyway :)
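For illustration, a rough sketch of the packing step, assuming 22:10 fixed-point scratch normals and a DS-style packed format (three signed 10-bit components per word); every name here is made up, and the clamp covers +1.0, which doesn't fit a signed 10-bit field:
Code: |
typedef unsigned int u32;
typedef struct { int x, y, z; } Vec32;  /* "scratch" normal in 22:10 fixed point */

/* 22:10 component -> signed 10-bit field (1 sign bit + 9 fraction bits) */
static int to_v10(int c)
{
    c >>= 1;                    /* drop from 10 to 9 fraction bits */
    return c > 511 ? 511 : c;   /* clamp +1.0, which would overflow */
}

/* pack a normalized scratch normal into one word */
u32 pack_normal(const Vec32 *n)
{
    return ((u32)(to_v10(n->x) & 0x3FF))
         | ((u32)(to_v10(n->y) & 0x3FF) << 10)
         | ((u32)(to_v10(n->z) & 0x3FF) << 20);
} |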
#118796 - qw3rty - Fri Feb 16, 2007 9:07 pm
After going back to a regular (square) mesh from an irregular one (odd rows were shifted by half a grid step), my normal calculation got much easier.
After switching from triangles to triangle strips I get pretty good performance.
The heightmap method looks interesting though! Smooth shading with the amount of calculation of flat shading :D
#118962 - silent_code - Sun Feb 18, 2007 5:40 pm
but you don't recalculate normals each frame, do you? that would be a horrible waste of cpu time. instead, just transform them together with the vertices. there are some good tutorials on this out there.
#119054 - qw3rty - Mon Feb 19, 2007 11:28 am
I'm doing a dynamic water surface.
I think I have no choice but to recalculate the normals every time the water surface changes.
But I got it fast enough for my project.
The water with 512 triangles (16x16 grid) takes half of my CPU time, but the water is my main gfx feature - so I think it will be ok.
And I still have room for optimization - e.g. putting the wave points into a faster memory region.
P.S.: transforming the normals with the vertices would need matrix multiplication, right? I think I wouldn't save that much CPU time, because the normal calculation is pretty simple in my case (since I've gone back to a regular grid)
#119072 - Rajveer - Mon Feb 19, 2007 1:33 pm
qw3rty wrote: |
P.S.: transforming the normals with the vertices would need matrix multiplication, right? I think I wouldn't save that much CPU time, because the normal calculation is pretty simple in my case (since I've gone back to a regular grid) |
Yeah it would require matrix multiplication for transformations, but wouldn't it be pretty quick as you can make use of the DS's matrix-multiplication hardware? Or doesn't it make that much of a performance difference?
#119094 - qw3rty - Mon Feb 19, 2007 4:31 pm
No idea - calculating 256 smooth normals takes approx. 50 scanlines (with high wave activity) - and I think most of that time is spent waiting for RAM access, which I could minimize by using a faster RAM area.
If I wanted to transform the normals instead, I would need to read almost as much data, because before I can transform a normal I need the change in angle/skew etc.
#119172 - silent_code - Tue Feb 20, 2007 1:00 pm
yes, i meant transforming normals by the hw.
one thing to optimize online recalculation is having precomputed dependencies between the triangles that contribute to a smooth normal. normally you would have to scan through the whole mesh to find these dependencies.
another thing could be a recalculation threshold. you tag each vertex that has moved, then in the recalculation step you only look at the recently moved vertices and recalculate their triangles' normals. if a triangle's normal doesn't differ too much from before, just don't recalculate the smooth normal. that way you could save a lot of computation, though the worst case would be a little worse than brute force (because of the extra checking) and it needs a bit more memory for the additional data structures.
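for illustration, a rough C sketch of that scheme; the dirty flags, the fixed 6-triangle fan and all helper names are placeholders, not code from this thread:
Code: |
#define NUM_VERTS 256

extern unsigned char dirty[NUM_VERTS];   /* set whenever a vertex moves */
extern int tri_of_vert[NUM_VERTS][6];    /* precomputed dependencies, -1 = none */

void recalc_face_normal(int tri);            /* placeholder */
int  face_normals_changed_much(int vert);    /* placeholder threshold test */
void resum_smooth_normal(int vert);          /* placeholder */

void update_normals(void)
{
    int v, i;
    for (v = 0; v < NUM_VERTS; v++) {
        if (!dirty[v])
            continue;                    /* untouched vertex, keep old normal */
        dirty[v] = 0;
        /* recompute the face normals of the triangles around this vertex */
        for (i = 0; i < 6; i++) {
            int t = tri_of_vert[v][i];
            if (t >= 0)
                recalc_face_normal(t);
        }
        /* only resum the smooth normal if the faces really changed */
        if (face_normals_changed_much(v))
            resum_smooth_normal(v);
    }
} |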
#119175 - Mighty Max - Tue Feb 20, 2007 1:45 pm
qw3rty wrote: |
I'm doing a dynamic water surface.
I think I have no choice but to recalculate the normals every time the water surface changes.
But I got it fast enough for my project.
The water with 512 triangles (16x16 grid) takes half of my CPU time, but the water is my main gfx feature - so I think it will be ok.
And I still have room for optimization - e.g. putting the wave points into a faster memory region.
P.S.: transforming the normals with the vertices would need matrix multiplication, right? I think I wouldn't save that much CPU time, because the normal calculation is pretty simple in my case (since I've gone back to a regular grid) |
This setup does not need the complete maths to calculate the normals. It should be much quicker to derive them from the extra information you have about the grid than to generate them from the triangles alone.
In such a grid:
- all normals point into the same half-space
- two components of each vector to a neighbour are constant
- the triangles contain a 90° angle
Let's say you have a grid in the x-y plane. A is the vector between two neighbouring points parallel to the x axis, B the one parallel to the y axis. All normals shall point up (positive z).
The x and y components of the A and B vectors are constant. You can now build a vector parallel to the normal: put the -z component of A into the new x component and the -z component of B into the new y component, and put the grid spacing into the new z (assuming it is the same in x and y). If needed (and not done by hardware), normalize the vector afterwards.
This should be much faster than evaluating trigonometric functions on the finished mesh.
_________________
GBAMP Multiboot
#119177 - qw3rty - Tue Feb 20, 2007 2:11 pm
I've done the cross product on paper and simplified it as much as possible - I think I have more or less what you are suggesting, Mighty Max.
The two vectors I cross are (y axis up):
A = (0, dy_a, step)
B = (step, dy_b, 0)
The resulting vector is N = step * (-dy_b, step, -dy_a).
Of course I divided N by step, resulting in
N = (-dy_b, step, -dy_a)
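For illustration, that boils down to two subtractions per triangle in C; h[][] and STEP here are placeholders for the actual wave data:
Code: |
#define STEP 1   /* grid spacing, assumed */

/* face normal of the grid triangle at (x, z), already divided by STEP */
void face_normal(const int h[16][16], int x, int z, int n[3])
{
    int dy_a = h[z + 1][x] - h[z][x];   /* height change along A = (0, dy_a, STEP) */
    int dy_b = h[z][x + 1] - h[z][x];   /* height change along B = (STEP, dy_b, 0) */
    n[0] = -dy_b;
    n[1] = STEP;
    n[2] = -dy_a;
} |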
#119208 - tepples - Tue Feb 20, 2007 8:17 pm
silent_code wrote: |
one thing to optimize online recalculation is having precomputed dependencies between the triangles that contribute to a smooth normal. normally you would have to scan through the whole mesh to find these dependencies. |
That takes linear time in the number of vertices and faces:
- Allocate a list of lists foundEdges[nVertices][maxEdgesPerVertex]. (If you are using an array for the inner list, maxEdgesPerVertex might be up to 8.)
- Build a list of edges around each vertex by scanning through the triangles: for each triangle, for each vertex of the triangle, add the next vertex of the triangle to that vertex's list. For instance, add vertexNo[1] to foundEdges[vertexNo[0]], vertexNo[2] to foundEdges[vertexNo[1]], and vertexNo[0] to foundEdges[vertexNo[2]]. By now, you have a list of edges from each vertex in the mesh. (See the sketch after this list.)
- For each vertex, fit a distance-regression plane to the set of edge points around the vertex. This plane contains the points' centroid, and its normal approximates the smooth normal of the vertex.
Doing steps 1 and 2 once or multiple times is a time-space tradeoff that depends on how many meshes you have vs. how often you deform them.
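For illustration, a C sketch of steps 1 and 2, assuming indexed triangles and a fixed inner-array size; the names follow the description above:
Code: |
#define N_VERTICES 256
#define MAX_EDGES_PER_VERTEX 8

/* foundEdges[v][0] holds the count, foundEdges[v][1..] the neighbours */
static int foundEdges[N_VERTICES][1 + MAX_EDGES_PER_VERTEX];

void build_edge_lists(const int tri[][3], int nTris)
{
    int t, i, v;
    for (v = 0; v < N_VERTICES; v++)
        foundEdges[v][0] = 0;                 /* step 1: empty lists */

    for (t = 0; t < nTris; t++) {             /* step 2: scan the triangles */
        for (i = 0; i < 3; i++) {
            int from = tri[t][i];
            int to   = tri[t][(i + 1) % 3];   /* next vertex in this triangle */
            int *list = foundEdges[from];
            if (list[0] < MAX_EDGES_PER_VERTEX)
                list[++list[0]] = to;
        }
    }
} |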
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.