#60553 - thegamefreak0134 - Fri Nov 11, 2005 6:32 pm
Does anyone have a nice piece of code that will take the 3d coordinates of a point and give me the 2d coordinates on screen based on the camera position and reference point? (meaning that the camera is in one spot and looking at another spot.) This would be extremely beneficial to my 3d engine, but I can't seem to find it anywhere...
_________________
What if the hokey-pokey really is what it's all about?
[url=http:/www.darknovagames.com/index.php?action=recruit&clanid=1]Support Zeta on DarkNova![/url]
#60595 - SittingDuck - Sat Nov 12, 2005 11:03 am
I haven't done much 3D work, but look at TONC.
#60773 - keldon - Mon Nov 14, 2005 6:34 pm
http://www.devmaster.net/articles/viewing-systems/
If you're specifying your camera by reference points rather than by rotation values and a position, then you use UVN vectors, which are simply the unit vectors of the camera's x, y, z axes.
Once the coordinates have been rotated and translated into the camera view, you have:
Code: |
EyeDistance = CameraPosition.Length
2D.X = 3D.X - ((DX / (3D.Z + EyeDistance)) * 3D.X)
2D.Y = 3D.Y - ((DY / (3D.Z + EyeDistance)) * 3D.Y) |
EDIT: To create the UVN basis for the camera, the N axis is the unit vector of the vector from the camera's position to the focus point. When N is not facing straight up or down, i.e. (N.y > -0.8) && (N.y < 0.8), cross N with (0,1,0) to create U. Then normalize U, and cross U with N to get V for free. When N is facing straight up, U keeps its previous value.
#62838 - keldon - Tue Dec 06, 2005 9:36 am
This is all Java code but should not be too hard to convert to C. You simply set the view reference point (vrp) and tell the camera to point at any given position. The getCameraMatrix method returns the combined translation + rotation.
Code: |
Point3D vrp;                  // view reference point (camera position)
Vector3D XAxis, YAxis, ZAxis; // the camera's U, V, N basis vectors

public Camera () {
    vrp = new Point3D (0, 0, 0);
    XAxis = new Vector3D (1, 0, 0);
    YAxis = new Vector3D (0, 1, 0);
    ZAxis = new Vector3D (0, 0, 1);
}

// Returns the view transform: translate the world so the camera sits
// at the origin, then rotate it into the camera's basis.
public Matrix getCameraMatrix () {
    Matrix translation = new Matrix ();
    Matrix rotation = new Matrix ();
    translation.setTranslationMatrix (-vrp.x, -vrp.y, -vrp.z);
    double m [][] = {
        { XAxis.x, XAxis.y, XAxis.z, 0 },
        { YAxis.x, YAxis.y, YAxis.z, 0 },
        { ZAxis.x, ZAxis.y, ZAxis.z, 0 },
        { 0, 0, 0, 1 }
    };
    rotation.m = m;
    rotation = rotation.multiplyMatrix (translation);
    return rotation;
}

// Re-orient the camera so it looks at p (rebuilds the UVN basis).
public void setFocusPoint (Point3D p) {
    if (p.distance (vrp) < 0.01) return;
    Vector3D angle = new Vector3D (p.x - vrp.x, p.y - vrp.y, p.z - vrp.z);
    angle.normalise ();
    ZAxis = angle;
    // only rebuild X from the world up vector when Z isn't near-vertical
    if (ZAxis.y <= 0.8 && ZAxis.y >= -0.8)
        XAxis = new Vector3D (0, 1, 0).crossProduct (ZAxis);
    YAxis = ZAxis.crossProduct (XAxis);
    XAxis.normalise ();
    YAxis.normalise ();
} |
#64762 - Cthulhu32 - Mon Dec 26, 2005 1:27 pm
Wouldn't be too hard even without built-in matrix multiplication functions :)
Very cool tutorial on rendering 3D space. I've seen a lot of mixed techniques for speed, such as the tunnel effect, but true 3D matrix calculations are the way to go when it comes to a real 3D engine. Fortunately (well, unfortunately for math majors) OpenGL can do a lot of these calculations for you, but if you're going to make an engine on, let's say, a GBA, you'll need to use a method like this. You can also do some research on ray tracing/beam tracing (all those techniques, way beyond me) for optimizing the 3D engine.
When I started looking at 3D engines I found this site really helpful for getting started: http://www.spacesimulator.net/ It's all using OpenGL, but it goes into how you use the Bresenham line algorithm to draw textures in polygons, and all that advanced goodness.
#101310 - keldon - Mon Sep 04, 2006 9:59 am
So did this help you in the end?
#101326 - kusma - Mon Sep 04, 2006 12:46 pm
keldon:
Actually, the projective divide is usually done like this:
x' = tx + (x / w) * sx
y' = ty + (y / w) * sy
z' = tz + (z / w) * sz
This assumes homogeneous coordinates. tx, sx, ty, sy, tz and sz are viewport mapping coefficients, calculated more or less like this:
tx = viewport_x + viewport_width / 2
sx = viewport_width / 2
ty = viewport_y + viewport_height / 2
sy = viewport_height / 2
tz = z_near + (z_far - z_near) / 2
sz = (z_far - z_near) / 2
Also note that z' is usually only needed if you're doing z-buffering.
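Put together, the divide-and-map step might look something like this rough C sketch (the Vec4 type, the project function and its parameter names are made up for illustration):
Code: |
/* Projective divide plus viewport mapping, as described above.
   Assumes homogeneous coordinates in v. */
typedef struct { float x, y, z, w; } Vec4;

void project(const Vec4 *v,
             float viewport_x, float viewport_y,
             float viewport_width, float viewport_height,
             float z_near, float z_far,
             float *px, float *py, float *pz)
{
    /* viewport mapping coefficients */
    float tx = viewport_x + viewport_width / 2.0f;
    float sx = viewport_width / 2.0f;
    float ty = viewport_y + viewport_height / 2.0f;
    float sy = viewport_height / 2.0f;
    float tz = z_near + (z_far - z_near) / 2.0f;
    float sz = (z_far - z_near) / 2.0f;

    /* projective divide, then map into the viewport */
    *px = tx + (v->x / v->w) * sx;
    *py = ty + (v->y / v->w) * sy;
    *pz = tz + (v->z / v->w) * sz;  /* only needed for z-buffering */
} |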
#108281 - Peter - Tue Nov 07, 2006 1:59 pm
The DirectX documentation is a great resource for this. Microsoft provides most of the formulas you need for 3D math. The explanations are also quite good, better than most tutorials out there. And they don't only discuss their D3D API; they also provide more general information, such as the stages of the 3D graphics rendering pipeline.
Hope it helps
#108307 - thegamefreak0134 - Tue Nov 07, 2006 5:55 pm
Cool. I already have the formulas required. What I lacked when I asked this question a couple of months ago was the knowledge that the formula doesn't include a camera location. You have to first translate the scene and rotate it properly to create a "camera" view. The actual 3D part is a simple projection formula, but the effect we are used to seeing that defines 3D is some rotational matrix math.
What I do not yet understand is how to quickly fill a polygon. The best method I have seen so far is starting at the top, figuring out the linear math for the edges of the triangle, and drawing horizontal lines down the triangle. But this seems awfully redundant, and it seems like there should be a way to figure out the scene on a per-pixel basis, rather than having to draw out every little part. (As in, start at the top left, work to the bottom right, and draw one pixel at a time.)
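For reference, the edge-walking fill described above comes down to something like this C sketch for the flat-bottom half of a triangle (a general triangle is first split into a flat-bottom and a flat-top half; plot_hline is a made-up span-drawing helper):
Code: |
/* Fill a flat-bottom triangle: apex (x0,y0) on top, horizontal edge at
   y1 running from x1 to x2. Purely illustrative; assumes y1 > y0. */
void fill_flat_bottom(float x0, float y0, float x1, float x2, float y1,
                      unsigned short color)
{
    float dxl = (x1 - x0) / (y1 - y0);  /* left edge slope, x per scanline */
    float dxr = (x2 - x0) / (y1 - y0);  /* right edge slope */
    float xl = x0, xr = x0;

    for (int y = (int)y0; y <= (int)y1; y++) {
        plot_hline((int)xl, (int)xr, y, color);  /* one span per line */
        xl += dxl;
        xr += dxr;
    }
} |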
If there is a way to do this, please point the way. If not, I am pretty much set in the 3D department. Thanks though!
_________________
What if the hokey-pokey really is what it's all about?
[url=http:/www.darknovagames.com/index.php?action=recruit&clanid=1]Support Zeta on DarkNova![/url]
#108317 - kusma - Tue Nov 07, 2006 8:10 pm
If the triangle is small enough, it may be optimal to use three line-distance functions to decide whether a pixel is drawn or not.
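A sketch of that idea in C (plot_pixel is a made-up helper, and the >= 0 tests assume a consistent, counterclockwise winding order):
Code: |
/* Edge-function rasterization: a pixel is inside the triangle when it
   lies on the same side of all three edges. */
#define MIN3(a,b,c) ((a)<(b) ? ((a)<(c)?(a):(c)) : ((b)<(c)?(b):(c)))
#define MAX3(a,b,c) ((a)>(b) ? ((a)>(c)?(a):(c)) : ((b)>(c)?(b):(c)))

static int edge(int ax, int ay, int bx, int by, int px, int py)
{
    /* z component of the cross product: which side of the line is p on? */
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

void fill_small_tri(int x0, int y0, int x1, int y1, int x2, int y2,
                    unsigned short color)
{
    /* walk the bounding box and test every pixel against all three edges */
    for (int y = MIN3(y0, y1, y2); y <= MAX3(y0, y1, y2); y++)
        for (int x = MIN3(x0, x1, x2); x <= MAX3(x0, x1, x2); x++)
            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                edge(x1, y1, x2, y2, x, y) >= 0 &&
                edge(x2, y2, x0, y0, x, y) >= 0)
                plot_pixel(x, y, color);
} |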
#109037 - FluBBa - Tue Nov 14, 2006 8:46 am
thegamefreak0134 wrote: |
it seems like there should be a way to figure out the scene on a per-pixel basis, rather than having to draw out every little part. (As in, start at the top left, work to the bottom right, and draw one pixel at a time.) |
Well that's more or less ray tracing, isn't it?
_________________
I probably suck, my not is a programmer.
#109046 - keldon - Tue Nov 14, 2006 11:47 am
There is a way to handle the polygons so that on each scanline you examine the polygons being drawn, dissect them at their intersections along the x axis, and then simply decide which segment comes in front of which (can't remember the name). There is also ray tracing, but the computation required makes it more costly than the z-buffer. Ray casting, on the other hand (used in Wolfenstein), works like ray tracing but only along a single horizontal plane, one ray per screen column. This only works in those types of games - I think Duke Nukem 3D still used ray casting but used scaling to give the illusion of looking up.
#109047 - kusma - Tue Nov 14, 2006 12:21 pm
keldon wrote: |
There is a way to handle the polygons so that on each scanline you examine the polygons being drawn, dissect them at their intersections along the x axis, and then simply decide which segment comes in front of which (can't remember the name).
|
Sounds like the technique called S-buffering ("span" or "segment" buffering) to me. Basically you scan-convert all polygons and insert each span into a per-scanline list of spans. Those spans can be clipped against each other, resulting in an overdraw factor of 0. Unfortunately, the data structures and the general overhead of inserting and clipping the spans usually mean that it's only really effective if you have big polygons and a lot of overdraw (i.e. big Quake-type scenes with no occlusion culling or PVS etc.). As this is not really a common case, it's usually not worth the hassle to implement S-buffers.
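A much-simplified sketch of the span insertion in C; it assumes polygons are inserted front to back, so a new span only fills the gaps left by spans already in the list (a real S-buffer also handles z-sorted insertion and span splitting):
Code: |
#include <stdlib.h>

typedef struct Span {
    int x0, x1;            /* inclusive pixel range on this scanline */
    int poly_id;           /* which polygon owns the span */
    struct Span *next;     /* next span, kept sorted by x0 */
} Span;

static Span *scanline[160];  /* one span list per scanline (GBA: 160) */

void insert_span(int y, int x0, int x1, int poly_id)
{
    Span **p = &scanline[y];
    while (x0 <= x1) {
        /* skip spans entirely to the left of the new span */
        while (*p && (*p)->x1 < x0)
            p = &(*p)->next;
        /* an existing span occludes the front of the new one */
        if (*p && (*p)->x0 <= x0) { x0 = (*p)->x1 + 1; continue; }
        /* fill the gap up to the next existing span (or to x1) */
        int end = (*p && (*p)->x0 <= x1) ? (*p)->x0 - 1 : x1;
        Span *s = malloc(sizeof *s);
        s->x0 = x0; s->x1 = end; s->poly_id = poly_id;
        s->next = *p;
        *p = s;
        p = &s->next;
        x0 = end + 1;
    }
} |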
#109048 - keldon - Tue Nov 14, 2006 12:30 pm
That's the one.
#109097 - Ant6n - Wed Nov 15, 2006 2:13 am
I wonder whether it'd make sense (on the slow GBA) to render a 3D object using only points (as opposed to polygons), i.e. take a model with around 200 points/vertices (e.g. a Quake 1 model), project all of them, z-order them and put them on the screen, preferably drawing big blobs if the object is close. That'd be sort of similar to voxel rendering. If one aims for more speed, one could even find a single orthogonal projection that approximates the perspective across all the points of the 3D model. Did anybody ever see something like that?
I wanted to implement something like that but haven't found the time yet (will this semester, and the one right after it, ever end!)
anton
#109133 - kusma - Wed Nov 15, 2006 11:08 am
Ant6n wrote: |
I wonder whether it'd make sense (on the slow GBA) to render a 3D object using only points (as opposed to polygons), i.e. take a model with around 200 points/vertices (e.g. a Quake 1 model), project all of them, z-order them and put them on the screen, preferably drawing big blobs if the object is close. That'd be sort of similar to voxel rendering. If one aims for more speed, one could even find a single orthogonal projection that approximates the perspective across all the points of the 3D model. Did anybody ever see something like that?
I wanted to implement something like that but haven't found the time yet (will this semester, and the one right after it, ever end!)
anton |
As a general rendering strategy it doesn't make much sense; having to transform and project each point is not effective. However, it can make sense as a slightly different technique: if a polygon is relatively small (so that polygon setup would be the bottleneck), you could instead recursively subdivide it until each edge is less than one pixel long, and draw a dot at each subdivision point. This is AFAIK an optimization done in renderers like Quake.
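A hedged sketch of that subdivision trick in C (plot_pixel is assumed, and the under-one-pixel test is approximate):
Code: |
#include <math.h>

/* Split a screen-space triangle at its edge midpoints until every edge
   is shorter than a pixel, then plot one dot per leaf triangle. */
void subdiv_tri(float x0, float y0, float x1, float y1,
                float x2, float y2, unsigned short color)
{
    /* stop when all edges are under a pixel in both x and y */
    if (fabsf(x0 - x1) < 1 && fabsf(y0 - y1) < 1 &&
        fabsf(x1 - x2) < 1 && fabsf(y1 - y2) < 1 &&
        fabsf(x2 - x0) < 1 && fabsf(y2 - y0) < 1) {
        plot_pixel((int)x0, (int)y0, color);
        return;
    }
    /* edge midpoints */
    float ax = (x0 + x1) / 2, ay = (y0 + y1) / 2;
    float bx = (x1 + x2) / 2, by = (y1 + y2) / 2;
    float cx = (x2 + x0) / 2, cy = (y2 + y0) / 2;
    /* recurse into the four sub-triangles */
    subdiv_tri(x0, y0, ax, ay, cx, cy, color);
    subdiv_tri(ax, ay, x1, y1, bx, by, color);
    subdiv_tri(cx, cy, bx, by, x2, y2, color);
    subdiv_tri(ax, ay, bx, by, cx, cy, color);
} |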
#109146 - tepples - Wed Nov 15, 2006 4:09 pm
Connect the points with lines, and you have Battlezone (arcade) and Red Alarm (Virtual Boy).
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.
#109164 - Peter - Wed Nov 15, 2006 6:44 pm
kusma wrote: |
Sounds like the technique called S-buffering ("span" or "segment" buffering) to me. Unfortunately, the data structures and the general overhead of inserting and clipping the spans usually mean that it's only really effective if you have big polygons and a lot of overdraw (i.e. big Quake-type scenes with no occlusion culling or PVS etc.). As this is not really a common case, it's usually not worth the hassle to implement S-buffers. |
One thing that pops into my head about the s-buffer (z-buffer too) is the possibility of sorting spans by material. This might sound stupid at first thought, but it would give you the option to store a texture in internal work RAM and then batch-draw all polygons which use that texture. Transferring the texture from ROM to RAM costs some time too, and I'm not really certain you'd win any performance, but it was worth bringing up imo ;)
#109167 - tepples - Wed Nov 15, 2006 7:00 pm
Peter wrote: |
One thing that pops into my head about the s-buffer (z-buffer too) is the possibility of sorting spans by material. This might sound stupid at first thought, but it would give you the option to store a texture in internal work RAM and then batch-draw all polygons which use that texture. |
Like the workaround for the PS2's lack of VRAM?
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.
#109244 - Ant6n - Thu Nov 16, 2006 9:53 am
To minimize texture data one could also use something like a portal engine and keep a visible set for each sector; then one caches only the textures that are actually used. Reduce the visible set by only keeping sectors that are in front of the camera. In a next step one could calculate the (approximate) distance of each sector and use low-res textures for far-away ones (i.e. a mipmap level per sector). Most of this information shouldn't change much from one frame to the next, so it can be reused. One could cook up something like Descent with that.
... On a side note, projection could be done using only simple trigonometry and dot products, coming down to a couple of dot products and multiplies and one division per projected point. Although in the end a couple of dot products is exactly a matrix multiply, one can still save a few cycles by reducing to 3, 4 or 5 degrees of freedom.
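As a sketch, that per-point work might look like this in C (Vec3, dot and the focal length d are illustrative names; U, V, N are the camera axes from earlier in the thread):
Code: |
typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Three dot products take the point into camera space;
   then a single perspective divide projects it. */
void project_point(Vec3 p, Vec3 eye, Vec3 U, Vec3 V, Vec3 N,
                   float d, float *sx, float *sy)
{
    Vec3 r = { p.x - eye.x, p.y - eye.y, p.z - eye.z };
    float cx = dot(r, U);   /* camera-space x */
    float cy = dot(r, V);   /* camera-space y */
    float cz = dot(r, N);   /* camera-space depth */
    float inv = d / cz;     /* the one division per point */
    *sx = cx * inv;
    *sy = cy * inv;
} |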
#109247 - Master S - Thu Nov 16, 2006 10:58 am
It would be possible to material-sort your s-buffer spans, but I doubt how useful it would be. From our experiments with s-buffers on the GBA, I must say it was slower than just brute-force drawing the polys and living with the overdraw, at the scene complexity (or lack thereof) possible on the GBA. On top of that, I can only think of one really efficient way of writing the inner loop of an affine texture mapper, and that will only work with 256*?? textures -> not enough space in IWRAM (where you would also keep unrolled versions of your speed-critical routines).
We have been thinking of ways to minimise overdraw, or in particular the fetching of texels for pixels not visible in the final scene, as the 2-waitstate delay for every texel fetch is quite annoying.
It might be worth making a more complex inner loop (code-wise) if one could get more efficient texel fetches; we still have some unimplemented ideas that could be fun to try.
Keep the ideas coming, we could all end up being inspired to do something cool :-)
#109256 - kusma - Thu Nov 16, 2006 1:35 pm
Master S wrote: |
It would be possible to material-sort your s-buffer spans, but I doubt how useful it would be. From our experiments with s-buffers on the GBA, I must say it was slower than just brute-force drawing the polys and living with the overdraw, at the scene complexity (or lack thereof) possible on the GBA. On top of that, I can only think of one really efficient way of writing the inner loop of an affine texture mapper, and that will only work with 256*?? textures -> not enough space in IWRAM (where you would also keep unrolled versions of your speed-critical routines).
|
The texture size can be coped with in a number of ways, so this isn't really a valid point. Having all the unrolled crucial routines in IWRAM at the same time doesn't strike me as a very good idea, so there should definitely be room for at least a small texture cache. Whether you can utilize it efficiently or not is a question of filler design, really.
Master S wrote: |
We have been thinking of ways to minimise overdraw, or in particular the fetching of texels for pixels not visible in the final scene, as the 2-waitstate delay for every texel fetch is quite annoying.
|
We've also been working on ways to reduce overdraw, and a basic c-buffer seems compelling. However "the ultimate solution" is IMO something completely different, but tricky to implement. Why scan-convert? ;)
Master S wrote: |
It might be worth making a more complex inner loop (code-wise) if one could get more efficient texel fetches; we still have some unimplemented ideas that could be fun to try.
|
I believe in inner-loop generation, where you specialize the inner loop based on the data it will process. Since stuff like deltas and texture addresses are constant for the entire polygon, you can really do some clever dataset inspections.
Master S wrote: |
Keep the ideas coming, we could all end up being inspired to do something cool :-)
|
amen.
#109257 - keldon - Thu Nov 16, 2006 2:19 pm
How about first drawing a flat-shaded scene at a reduced resolution, then using this scene to decide which polygons' textures need to be drawn in which places? It might then be a viable option to draw perspective-correct textures, as we will only have divides for pixels that are actually going to be drawn.
And to reduce the problem of pixels being missed because a reduced resolution determines what gets drawn, we could draw pixels from all polygons at edges in the low-res map.
EDIT: although it would require that we not perform the perspective transformation twice for each polygon.
#109277 - Ant6n - Thu Nov 16, 2006 7:49 pm
I think keldon wants perspective-correct drawing.
How about second-degree polynomial approximations here? You only need 2 adds per step to evaluate those (though finding good approximations is hard).
If one uses a good visibility-detection algorithm and sticks to convex sectors etc. then one would have very little overdraw; I once heard that Quake 1 only had an overdraw factor of about 1.5 to 2...
Maybe a good way to keep only material in the cache that is actually drawn is to set some memory aside as a cache and put material into it on demand. Keep a flag for every material recording the frame in which it was last used. At the end of each frame, evict all material that wasn't used during that frame, and move all remaining material to the front so that the free space sits at the back of the cache. That means moving a lot of cached memory around, but even with a 32K cache it's not so bad, since one can theoretically move 2 bytes per cycle.
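A loose C sketch of that frame-stamped cache (sizes and names are invented; it assumes entries are kept in allocation order, so compaction can slide the surviving data toward the front):
Code: |
#include <string.h>

#define CACHE_SIZE (32 * 1024)

typedef struct {
    int material_id;
    int last_used_frame;   /* frame stamp, updated on every use */
    int offset, size;      /* location of the data within cache[] */
} Entry;

static unsigned char cache[CACHE_SIZE];
static Entry entries[64];
static int num_entries;

void end_of_frame_compact(int current_frame)
{
    int out = 0, write_off = 0;
    for (int i = 0; i < num_entries; i++) {
        if (entries[i].last_used_frame != current_frame)
            continue;                    /* evict: not used this frame */
        /* slide surviving texture data toward the front of the cache */
        memmove(cache + write_off, cache + entries[i].offset,
                entries[i].size);
        entries[i].offset = write_off;
        write_off += entries[i].size;
        entries[out++] = entries[i];
    }
    num_entries = out;   /* free space now starts at write_off */
} |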
#109281 - Master S - Thu Nov 16, 2006 8:43 pm
I didn't mean to keep all the speed-critical routines in IWRAM at once. Right now I use part of my IWRAM as a stack where I can put code and data temporarily while needed. Anyway, the copy to IWRAM is not free, so abusing this copy-just-in-time scheme is not to be recommended.
I still don't see how to get room for a texture with an acceptable resolution for a more or less fullscreen scene, even though I admit that 256*256 is more than enough; it just opens up some nice optimizations :)
Quote: |
based on the data it will process. Since stuff like deltas and texture addresses are constant for the entire polygon, you can really do some clever dataset inspections |
Agreed, my current implementation of an affine texture mapper only has 5 instructions in the inner loop (fetch texel, plot pixel and advance u/v).
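In C, that inner loop amounts to something like the following sketch (the real thing would be unrolled ARM assembly in IWRAM; dst is assumed to be an 8-bit back buffer in work RAM, and u/v are 16.16 fixed point):
Code: |
void affine_span(unsigned char *dst, const unsigned char *texture,
                 int len, unsigned int u, unsigned int v,
                 unsigned int du, unsigned int dv)
{
    while (len--) {
        /* fetch texel: a 256-wide texture makes the address math cheap */
        *dst++ = texture[((v >> 16) << 8) + (u >> 16)];  /* plot, advance x */
        u += du;  /* advance u */
        v += dv;  /* advance v */
    }
} |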
Quote: |
Why scan-convert? ;) |
Sounds interesting, could you reveal a little more?
#109286 - Ant6n - Thu Nov 16, 2006 10:08 pm
Master S wrote: |
Anyway, the copy to IWRAM is not free, so abusing this copy-just-in-time scheme is not to be recommended. |
I basically assume that there is little change from one frame to the next, so the textures stay in the cache and it doesn't matter when they were copied into RAM. That only works if the cache can hold a full frame's worth of textures, plus the textures from the last frame that aren't used anymore.
Quote: |
Agreed, my current implementation of an affine texture mapper only has 5 instructions in the inner loop (fetch texel, plot pixel and advance u/v). |
So how many pixels can you push per second altogether? Don't you need to advance in x? How can you put code somewhere "temporarily" - wouldn't you need to know its size?
On ideas:
One can push 32 transformed sprites per scanline. If one uses sprites that only have triangles on them (the rest being transparent), one could affinely map these across the screen as arbitrarily shaped triangles, as long as they are not too big (and one doesn't want subpixel accuracy when it comes to positioning). I wonder whether one could create something with that.
#109331 - Master S - Fri Nov 17, 2006 9:04 am
Quote: |
So how many pixels can you push per second altogether? |
I don't know, I haven't benchmarked it.
Quote: |
Don't you need to advance in x? |
Of course, my mistake; it also increments x in those 5 instructions.
Quote: |
How can you put code somewhere "temporarily" - wouldn't you need to know its size? |
It sure does, but you can get that from the assembler (and maybe also from gcc, haven't tried) by putting a few labels before and after the code.
It might be possible for the current toolchain to automatically work with RAM overlays; I haven't investigated it, I just compile/assemble to ROM and copy to IWRAM as needed.
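For illustration, the label trick plus the copy to IWRAM could look roughly like this (the __span_code_* symbols and the .iwram section name are assumptions and depend on your toolchain and linker script):
Code: |
/* Symbols placed by labels around the routine in the .s file. */
extern const unsigned int __span_code_start[];
extern const unsigned int __span_code_end[];

/* Scratch area in IWRAM; the section name is toolchain-dependent. */
static unsigned int iwram_buf[256] __attribute__((section(".iwram")));

void load_span_routine(void)
{
    unsigned int n = __span_code_end - __span_code_start;
    for (unsigned int i = 0; i < n; i++)  /* copy word by word */
        iwram_buf[i] = __span_code_start[i];
} |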
#109337 - kusma - Fri Nov 17, 2006 11:55 am
Ant6n wrote: |
I basically assume that there is little change from one frame to the next, so the textures stay in the cache and it doesn't matter when they were copied into RAM. That only works if the cache can hold a full frame's worth of textures, plus the textures from the last frame that aren't used anymore.
|
That's a pretty useless assumption, you'll definitely use more than 32k of textures per frame, unless you're doing some kind of retarded manga-lookalike rendering. ;)
Quote: |
On ideas:
One can push 32 transformed sprites per scanline. If one uses sprites that only have triangles on them (the rest being transparent), one could affinely map these across the screen as arbitrarily shaped triangles, as long as they are not too big (and one doesn't want subpixel accuracy when it comes to positioning). I wonder whether one could create something with that.
|
This has of course been done before ;)
http://www.pouet.net/prod.php?which=5235
It's a cool idea, but I find it a bit too restrictive ;)
#109338 - kusma - Fri Nov 17, 2006 12:02 pm
Master S wrote: |
I still don't see how to get room for a texture with an acceptable resolution for a more or less fullscreen scene, even though I admit that 256*256 is more than enough; it just opens up some nice optimizations :)
|
Why do you need a full texture in IWRAM at once? If you can find out what memory you'll need for which portion of the screen, you can load only that memory into IWRAM (and get better waitstates, as you can block-load).
Master S wrote: |
Quote: | based on the data it will process. Since stuff like deltas and texture addresses are constant for the entire polygon, you can really do some clever dataset inspections |
Agreed, my current implementation of an affine texture mapper only has 5 instructions in the inner loop (fetch texel, plot pixel and advance u/v).
|
Well, if you're doing mipmapping, then you'll always up-sample the texture. When up-sampling, some texels are drawn twice. This means that you don't always have to fetch a texel for every pixel you draw. Even better, when these cases happen on a 16-bit aligned address, you can store both pixels in a single store instruction. Now how cool is that? ;)
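A sketch of the paired store (names are illustrative; in a generated, specialized loop you would know from the deltas when the second texel repeats and could skip its fetch entirely):
Code: |
void affine_span_paired(unsigned short *dst16, const unsigned char *texture,
                        int len, unsigned int u, unsigned int v,
                        unsigned int du, unsigned int dv)
{
    for (int i = 0; i < len; i += 2) {
        unsigned char t0 = texture[((v >> 16) << 8) + (u >> 16)];
        u += du; v += dv;
        unsigned char t1 = texture[((v >> 16) << 8) + (u >> 16)];
        u += du; v += dv;
        /* both pixels go out in one aligned 16-bit store; when
           up-sampling, t1 often equals t0 and its fetch can be elided */
        *dst16++ = (unsigned short)(t0 | (t1 << 8));
    }
} |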
Master S wrote: |
Quote: | Why scan-convert? ;) |
Sounds interesting, could you reveal a little more? |
I wish I could, but this is trade secrets etc. It's a bitch to work in the 3D graphics industry ;)
#109345 - Master S - Fri Nov 17, 2006 1:39 pm
Quote: |
If you can find out what memory you'll need for which portion of the screen, you can load only that memory into IWRAM (and get better waitstates, as you can block-load). |
Point taken. I actually did think about that, it's just that no good method for predicting a good "texture chunk" to cache pops into my head :) Maybe I need to think some more about that.
Quote: |
Well, if you're doing mipmapping, then you'll always up-sample the texture. When up-sampling, some texels are drawn twice. This means that you don't always have to fetch a texel for every pixel you draw |
Well, again - point taken. Actually my hobby GBA todo list tells me to implement per-polygon mipmapping, as I want to try this :)
Quote: |
Even better, when these cases happen on a 16-bit aligned address, you can store both pixels in a single store instruction. Now how cool is that? ;) |
Pretty cool.
Quote: |
I wish I could, but this is trade secrets etc. It's a bitch to work in the 3D graphics industry ;) |
Too bad, but of course I'll have to respect that. Any chance you can tell whether it's actually a method implemented in the 3D hardware you are working on now?
#109350 - kusma - Fri Nov 17, 2006 2:53 pm
Quote: |
Quote: | I wish I could, but this is trade secrets etc. It's a bitch to work in the 3D graphics industry ;) |
Too bad, but of course I'll have to respect that. Any chance you can tell whether it's actually a method implemented in the 3D hardware you are working on now? |
Well, I guess I can tell _something_ about it. It's generally an algorithm for rendering polygons that looks a lot more like what hardware rasterizers usually do (and that's where the idea came from). The algorithm greatly reduces the cost of polygon setup and completely removes the need for frustum clipping. Another issue it copes with quite nicely is overdraw; we're basically staying at zero overdraw at almost no cost. So the answer is more or less yes and no. It originates from that, but we have modified it quite a bit to better suit software rendering on low-memory systems.
#109386 - Ant6n - Fri Nov 17, 2006 10:14 pm
kusma wrote: |
That's a pretty useless assumption, you'll definitely use more than 32k of textures per frame, unless you're doing some kind of retarded manga-lookalike rendering. ;)
|
The assumption is not useless if one uses 8-bit textures at half horizontal resolution (to get around the limitation of the 16-bit VRAM bus quickly), which makes 19200 bytes per frame. Assuming little overdraw, up-sampling of textures, and that one can still use EWRAM when the cache gets too big, this is not completely useless. The basic idea is that the textures used most often (the ones you always see) stay in the cache and move to the front, where they would be in IWRAM.
#109712 - kusma - Mon Nov 20, 2006 5:09 pm
Ant6n wrote: |
The assumption is not useless if one uses 8-bit textures at half horizontal resolution (to get around the limitation of the 16-bit VRAM bus quickly), which makes 19200 bytes per frame.
|
How is halving the horizontal TEXTURE SIZE going to help you fight the 16-bit VRAM bus? A texture can be rotated in any direction, causing the halving to end up in the vertical direction instead. I also wonder where you get the 19200-byte number from, as a 256x256 texture is 65536 bytes, 128x256 is 32768 bytes, 128x128 is 16384, and so on...
The only thing I can think of that matches your 19200 is 120x160 (half of the GBA's screen resolution), but this certainly has nothing to do with texture sizes. If you're drawing screen-aligned pixels, then you are _not_ doing normal texture mapping.
Ant6n wrote: |
Assuming little overdraw, up-sampling of textures, and that one can still use EWRAM when the cache gets too big, this is not completely useless. The basic idea is that the textures used most often (the ones you always see) stay in the cache and move to the front, where they would be in IWRAM. |
Are we even talking about the same thing here?
But yes, other memory (it would be ROM in the case I'm talking about) could be used for backup, and IWRAM for the most-used textures. A static per-scene caching of _some_ of the textures in a scene does indeed make sense, but you clearly specified that you were going to keep a _full_ frame of textures in the cache. This is something I still consider a useless assumption. Sure, you can make datasets that work that way - but not without annoying the hell out of your graphician ;)
#109785 - Ant6n - Tue Nov 21, 2006 5:51 am
Quote: |
How is halving the horizontal TEXTURE SIZE going to help you fight the 16-bit VRAM bus? A texture can be rotated in any direction, causing the halving to end up in the vertical direction instead. |
My assumption (gee) would be to use 8-bit textures, but you can't write bytes directly into VRAM, only halfwords and full words. The simplest (and fastest) way around that is to just double up two pixels next to each other on screen (if I remember right, that is what happens when writing a byte into VRAM anyway).
Then, if one wants to up-sample, one has to use different mipmap levels - if you wanna go very fancy you can create mipmap levels like 64x32 and 32x64 for a 64x64 texture, so that one can up-sample without overdoing it in the other dimension...
And yes, I know there is a difference between 'TEXTURE SIZE' and the size of the screen, yet there is a correlation between them if you assume little overdraw and up-sampling of textures. If you up-sample, then you won't need a 128x128 texture most of the time.
And yes, I made a mistake - I meant to say to keep a cache in IWRAM and another one in EWRAM for when the first is full, with the longest-used textures in IWRAM.
I am sorry to hear that this idea does not work for you, but maybe some people are interested in making a "retarded manga-lookalike"; in my eyes that's more interesting than a Quake clone. Of course it's my mistake that the phrase "keep the ideas coming" does not cover ideas that you decide to shoot down. Instead of labelling others' ideas useless, you are invited to reveal your own. I, for my part, like people's weird ideas and find it a pity when people call them useless because at first sight they seem too constraining.
Anton
#109811 - kusma - Tue Nov 21, 2006 12:20 pm
Ant6n wrote: |
My assumption (gee) would be to use 8-bit textures, but you can't write bytes directly into VRAM, only halfwords and full words. The simplest (and fastest) way around that is to just double up two pixels next to each other on screen (if I remember right, that is what happens when writing a byte into VRAM anyway).
|
So far you are correct, yes.
Ant6n wrote: |
Then, if one wants to up-sample, one has to use different mipmap levels - if you wanna go very fancy you can create mipmap levels like 64x32 and 32x64 for a 64x64 texture, so that one can up-sample without overdoing it in the other dimension...
|
No, because a texture can be rotated in any direction. Your suggestion here only applies to textures that are aligned with the screen.
Ant6n wrote: |
And yes, I know there is a difference between 'TEXTURE SIZE' and the size of the screen, yet there is a correlation between them if you assume little overdraw and up-sampling of textures. If you up-sample, then you won't need a 128x128 texture most of the time.
|
No, those correlations aren't really there. Textures are mapped in 3D space; meshes are rotated and projected onto the screen, and these transformations usually remove most of the correlation. Where it does exist, you usually don't even need a general poly-filler at all.
You bring up mipmapping as a solution, but normal mipmapping is a uniform scale of both the s and t coordinates, so it doesn't solve the double-pixels problem. You COULD improve the correlations by creating all combinations of mipmap reductions on both axes, but then you've multiplied your texture data size by four - effectively just hurting caching.
Ant6n wrote: |
And yes, I made a mistake - I meant to say to keep a cache in IWRAM and another one in EWRAM for when the first is full, with the longest-used textures in IWRAM.
|
Well, duh. If what I criticized for being a useless assumption wasn't what you intended to explain, then how am I the bad guy here? I only said that I found what you explained to be a useless assumption. I can't read minds.
Ant6n wrote: |
I am sorry to hear that this idea does not work for you, but maybe some people are interested in making a "retarded manga-lookalike"; in my eyes that's more interesting than a Quake clone. Of course it's my mistake that the phrase "keep the ideas coming" does not cover ideas that you decide to shoot down. Instead of labelling others' ideas useless, you are invited to reveal your own. I, for my part, like people's weird ideas and find it a pity when people call them useless because at first sight they seem too constraining.
|
I think you're taking this whole thing a bit too personally. We're discussing a particular technique, and I did give your idea credit once it had fallback storage. It was the assumption I called useless, and the "retarded manga-lookalike" remark is clearly a personal opinion.