gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies. A new forum can be found here.

DS development > NORMAL_PACK

#93523 - tciny - Wed Jul 19, 2006 6:10 pm

Hi,

I'm currently writing a viewer (on Windows/OSX) for checking files that were saved in a model format that I'm developing for the NDS.
Now, to save time, I store the normals in the packed format that NORMAL_PACK already returns.

My problem now is: I want to get the v10 values out of there as floats again. I tried pretty long and hard, but I keep losing the sign. (Negative numbers like -0.5 appear as 1.5; very odd)

This is what I wrote so far:
#define v10tofloat(n) (((float)(n)) / (1<<9))
#define NORMAL_UNPACK_X(x) ((v10)((x) & 0x3FF))
#define NORMAL_UNPACK_Y(y) ((v10)(((y) >> 10) & 0x3FF))
#define NORMAL_UNPACK_Z(z) ((v10)((z) >> 20))

The corresponding defines from the devkit are
#define floattov10(n) ((v10)((n)*(1<<9)))
#define NORMAL_PACK(x,y,z) (((x)&0x3FF) | (((y) & 0x3FF)<<10) | ((z)<<20))

It works fine for positive numbers... any ideas where the problem might be?

Also: Am I correct in thinking that values like 1.0 are very problematic, as they occupy the same bit the sign would? Right now, while exporting, I convert values like 1.0 to something like 0.999...

Any help is greatly appreciated! :)

#93540 - ecurtz - Wed Jul 19, 2006 7:59 pm

You're going to need to mask off and test the sign bit yourself.

Something like:

Code:
#define v10tofloat(n) ((n) & (1<<9)) ? -((float)(n) / (1<<8)) : ((float)(n) / (1<<8))


Of course if you're using macros like that you need to make sure you don't have an expression for n...

#93644 - tciny - Thu Jul 20, 2006 11:12 am

Thanks, but I'm still getting very weird errors. -256 now becomes 3...

Here's what I do for debugging:
Code:

#define floattov10(n)       ((v10)((n) * (1 << 9) ))
#define v10tofloat(n)      ((n) & (1<<9)) ? -((float)(n) / (1<<8)) : ((float)(n) / (1<<8))
#define NORMAL_PACK(x,y,z)  (((x) & 0x3FF) | (((y) & 0x3FF) << 10) | ((z) << 20))
#define NORMAL_UNPACK_X(x)   ((v10)((x) & 0x3FF))


uint32   pack = (uint32)( NORMAL_PACK( floattov10( -0.5f ), floattov10( 0.5f ), floattov10( 1.0f ) ) );
v10   unpackedX = NORMAL_UNPACK_X(pack);
float   unpackedFX = v10tofloat(unpackedX);


Any suggestions where the problem might be?
Best regards

#93802 - hellfire - Fri Jul 21, 2006 10:30 am

negative numbers are stored in two's complement, so after masking to 10 bits they come back as large unsigned values.
what needs to be done is undoing the negation within the lower 10 bits and then re-negating the whole thing:
-(~n+1) = -(0x3ff-n+1) = (n-0x400)

so, in your context, this should work:
#define v10tofloat(n) (((n) & 0x200) ? ((float)((n) - 0x400) / 512.0f) : ((float)(n) / 512.0f))

but normals should actually be scaled by 511.5 (depending on your platform's floating-point rounding behaviour), otherwise an absolutely legal normal of (1,0,0) would overflow to (-1,0,0).


Last edited by hellfire on Fri Jul 21, 2006 11:35 am; edited 2 times in total

#93805 - tciny - Fri Jul 21, 2006 11:14 am

hey, thanks a lot, that actually did the trick.

about the overflows: I changed the exporter so that a 1.0f normal becomes something like 0.99f, so 512 can never be reached.

Thanks a lot for your help!!