gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies.

Coding > Memory Corruption

#23353 - Gene Ostrowski - Sat Jul 10, 2004 9:14 am

Has anyone run into an odd bug where your variables are getting tromped on unexpectedly when watching them through the GDB debugger?

In my scenario, I have declared some variables in my .cpp file:

s16 var1;
s16 var2;

Then, somewhere in the code is:

var1=10;
var2=20;

If I set up watches on these two variables, put a breakpoint at the var1=10 line and step through the code, as soon as it writes a value to var1, BOTH values change to garbage. I even tried separating them in memory so they are not contiguous and still see garbage:

[Watch screen may show]
var1 = 54223462
var2 = -213334563

In my case, I've even tried explicitly declaring the variables in both EWRAM and IWRAM, and the problem happens wherever I place them, whether I initialize them or not. It does not appear that they are in an illegal memory location (out of bounds), because if I watch the address of the variable in the watch window, it shows...

&var1 = 0x02010990

... or something that is well within bounds of the memory area in question.
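For reference, pinning a variable to a particular region is normally done with GCC section attributes along these lines; the EWRAM_DATA/IWRAM_DATA macro names below are devkitARM-style and may be spelled differently in the HAM headers, so treat this only as a sketch:

typedef signed short s16;

#define EWRAM_DATA __attribute__((section(".ewram")))   /* 0x02000000 region */
#define IWRAM_DATA __attribute__((section(".iwram")))   /* 0x03000000 region */

EWRAM_DATA s16 var1 = 0;   /* should end up somewhere around 0x02xxxxxx */
IWRAM_DATA s16 var2 = 0;   /* should end up somewhere around 0x03xxxxxx */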

I thought I might be overrunning memory, since I've been working on the source for this project for some time and the issue appeared suddenly. However, as a test I commented out a good chunk of variable declarations to ensure that I "freed up" much more memory than my new code added.

Anybody ever run into this kind of error before, and how did you fix it?

I'm pulling my hair out!
_________________
------------------
Gene Ostrowski

#23356 - col - Sat Jul 10, 2004 12:35 pm

Gene Ostrowski wrote:
Has anyone run into an odd bug where your variables are getting tromped on unexpectedly when watching them through the GDB debugger?
[...]
Anybody ever run into this kind of error before, and how did you fix it?


Are you compiling with optimization off?

If you have any optimization level switched on, what you are probably seeing is the registers that will be (or have been) used for those variables being reused for something else.
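A rough sketch of that effect: with optimization on, the compiler is free to keep a plain variable in a register, or to delay or drop its stores entirely, so a memory watch shows stale garbage. Declaring a variable volatile forces every read and write to go through memory, which keeps it watchable at a small performance cost (the names below are made up for illustration):

typedef signed short s16;

s16 fast_var = 0;            /* may live only in a register at -O2/-O3    */
volatile s16 debug_var = 0;  /* every access is forced out to memory      */

void update(void)
{
    fast_var  = 10;   /* this store may be delayed, merged, or eliminated */
    debug_var = 20;   /* this store is guaranteed to hit memory           */
}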

col

#23401 - Gene Ostrowski - Sun Jul 11, 2004 10:56 pm

Well... I'm not sure. I would assume that optimization is off, since I'm using the default HAMLib build makefiles. When compiling for GDB output, the makefile indicates that it uses -O0 -g as the options, and rebuilds from scratch.

So assuming that the default configuration of the makefiles isn't broken, it should be doing it correctly...

Since I posted the original email, I've discovered that if I change the variable to int instead of s16, I can debug it without problems!

That makes absolutely no sense whatsoever, but that's what I see.
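For what it's worth, a minimal sketch of that workaround is to widen the variables only for debug builds, for example with a typedef (the DEBUG_BUILD macro name here is made up for illustration):

typedef signed short s16;

#ifdef DEBUG_BUILD
typedef int watch16_t;   /* full 32-bit word, plays nicely with the watch window */
#else
typedef s16 watch16_t;   /* original 16-bit type for the real build              */
#endif

watch16_t var1;
watch16_t var2;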

Thanks for the reply, and if you have any more insight, please let me know.

BTW, do you know the difference between -O2 and -O3?
_________________
------------------
Gene Ostrowski

#23402 - torne - Mon Jul 12, 2004 1:26 am

-O3 will inline functions automatically where it thinks it will be useful; -O2 will only inline functions with the 'inline' attribute (and even then it will only do it if it thinks it will be useful). All other optimisation parameters are the same. -O3 has a tendency to increase code size a bit more than -O2 because of the inlining.
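A small example of that distinction, purely as an illustration of the behaviour described above: at -O2 only the function marked inline is a candidate for inlining, while at -O3 GCC may also decide to inline the plain helper on its own:

static inline int lerp(int a, int b, int t)   /* candidate at -O2 and -O3     */
{
    return a + (((b - a) * t) >> 8);
}

static int helper(int x)                      /* auto-inline candidate at -O3 */
{
    return x * x + 1;
}

int blend(int a, int b, int t)
{
    return lerp(helper(a), helper(b), t);
}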

If you have a development enviroment that has the 'info' command (a Cygwin install, for example) then 'info gcc' will give you the full compiler documentation, which explains which switches are included in which optimisation setting, and what those switches do. If not, you can get it on the web at http://www.cs.utexas.edu/users/UTCS/online-docs/info2html/info2html.cgi?(gcc)Top (and many other places; google is your friend).