gbadev.org forum archive

This is a read-only mirror of the content originally found on forum.gbadev.org (now offline), salvaged from Wayback Machine copies. A new forum can be found here.

Coding > Location arm-elf-gcc places object files?

#27734 - Abscissa - Wed Oct 20, 2004 5:30 am

Unless it's just too late for me to be thinking straight, it seems that arm-elf-gcc (and for all I know, maybe this is the same with normal gcc as well) always places the object files it generates in the current directory rather than the directory the source file is located in.

i.e.:

(path here)arm-elf-gcc -c .\subdirectory\file.c

After that, file.o is located at ".\file.o" rather than ".\subdirectory\file.o".

Am I either nuts or missing something obvious? If not, is there any way I can tell the compiler to knock it off? :)

#27735 - sajiimori - Wed Oct 20, 2004 6:09 am

Use -o to specify an exact path+filename for the output file.
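
For example (the paths here are just placeholders):

Code:
arm-elf-gcc -c subdirectory\file.c -o subdirectory\file.o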

#27737 - Abscissa - Wed Oct 20, 2004 7:18 am

Well, I'm using a makefile batch-mode inference rule. Multiple C files -> multiple O files. The -o switch wouldn't really work for that, would it? Hmm, I just tried using a wildcard with -o, but it says that I can't use -o with -c. That makes sense.

I'd like to be able to assume that the source files could be in any directory (or at least any sub-directory). The way I have things working, it would be very difficult to get the linker to find the object files if they're not in the same directory as the matching source file. It may be a little easier if I didn't use the batch-mode inference, but I'd rather not have to resort to that.

BTW, I'm using the NMAKE that comes with MSVC 6.

#27738 - pan69 - Wed Oct 20, 2004 7:41 am

Quote:
Am I either nuts or missing something obvious? If not, is there any way I can tell the compiler to knock it off? :)


No, you're not nuts, but you're missing something obvious. You should start GCC in the directory where your source file is located (make sure your GCC bin directory is added to your path variable, of course), and not the other way around. GCC puts its output files in the directory where it was started, so in your case the GCC directory.
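
i.e., roughly this, reusing the paths from your example:

Code:
cd subdirectory
arm-elf-gcc -c file.c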

- Pan

(btw: I'm using arm-elf-gcc too, it works perfectly)

#27754 - Abscissa - Wed Oct 20, 2004 5:25 pm

pan69 wrote:
You should start GCC in the directory where your source file is located (make sure your GCC bin directory is added to your path variable, of course), and not the other way around. GCC puts its output files in the directory where it was started, so in your case the GCC directory.


Shoot, I was afraid I'd have to do that. Normally that's not a big deal for me, but I was hoping to get my makefile to allow the source files to be in arbitrary directories.

I guess a little background is in order: I'm working on an updated version of the GBA AppWizard. That doesn't seem to have been touched in ages (the latest version assumes DevKitAdvance), and I think it could use some work by now. I plan to make a VS .NET version as well, and possibly options to support Ant or NAnt buildfiles (I've been having more of an interest in Ant lately). This GBA AppWizard I'm working on also ties in some of the functionality from VCMake - specifically, parsing of project files to generate a list of source files that's consistent with MSVC's FileView. As a side note, I've been beginning to understand the reasons behind many of the old GBA AppWizard's and VCMake's limitations ;)

The reason that's relevant is that I wanted to allow the user of the AppWizard to arrange a hierarchy of source directories however they see fit. You could have them all in the project's base directory, have them all in a \src subdirectory, or for very large projects have \src\engine\, \src\audio\, \src\ai\, or whatever. Then my VCMake-equivalent would grab the sources to be built from the project file and give the makefile a list of them regardless of their physical directory.

It seems that the best thing for me to do at this point would be to add functionality to my VCMake-equivalent so that it will group the source files by their directory, and make a list of the different directories so that the makefile can somehow compile just one directory at a time, cd'ing in between.
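
Something like this for each directory, perhaps (just a rough sketch; the directory and file names here are made up, and the real lists would be generated by my tool):

Code:
ENGINE_SRCS = file1.c file2.c

engine:
	cd src\engine && arm-elf-gcc -c $(ENGINE_SRCS)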

#27757 - sajiimori - Wed Oct 20, 2004 6:42 pm

You can use -o with -c. Put the output filename after -o, and the input filename after -c.

You can also write a generic make rule that will produce the output file in the same directory as the input file. In the rule body, use $< to refer to the input file and $@ to refer to the matching output file.
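
Something along these lines, in GNU Make syntax (NMAKE spells its inference rules differently, so treat this as a sketch):

Code:
%.o: %.c
	arm-elf-gcc -c $< -o $@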

#27760 - Abscissa - Wed Oct 20, 2004 7:35 pm

sajiimori wrote:
You can use -o with -c. Put the output filename after -o, and the input filename after -c.

You can also write a generic make rule that will produce the output file in the same directory as the input file. In the rule body, use $< to refer to the input file and $@ to refer to the matching output file.


That only works if I send one source file and one output file at a time. arm-elf-gcc only accepts one output file. I just tried these from the command line:

"-o file1.o file2.o" makes gcc take file2.o as an input file.
"-o file1.o -o file2.o" makes gcc ignore the "-o file1.o" portion and take only file2.o as the output.
"-o "file1.o file2.o"" makes it think that the output should be a single file that's named "file1.o file2.o"
"-o *.o" doesn't work because it doesn't expand wildcards in the -o directive and thinks that you want a file named "*.o"
(For each one of those, I also had "-c file1.c file2.c")

I could just have the makefile compile each source with many individual calls to gcc instead of one lump "gcc -c file1.c subdir\file2.c otherdir\file3.c" like I want to do, but then I'd lose the benefits of the single lump call. Although I'm willing to split it into a single call per directory, running gcc for each individual file would be inefficient.

I did just notice that I can compile multiple sourcefiles into a single object file, but then I'd have to recompile all the sources whenever one of them changes and that would defeat the purpose of using makefiles.

#27764 - tepples - Wed Oct 20, 2004 9:02 pm

Abscissa wrote:
I could just have the makefile compile each source with many individual calls to gcc instead of one lump "gcc -c file1.c subdir\file2.c otherdir\file3.c" like I want to do, but then I'd lose the benefits of the single lump call. Although I'm willing to split it into a single call per directory, running gcc for each individual file would be inefficient.

You claim that the setup and teardown of GCC processes when you compile one source code file at a time makes the build process inefficient. Have you actually timed it? Even then, sometimes you have to trade off some build processing time inefficiency for the increased developer efficiency that a more organized tree brings.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.

#27765 - sajiimori - Wed Oct 20, 2004 9:23 pm

Quote:
I did just notice that I can compile multiple sourcefiles into a single object file, but then I'd have to recompile all the sources whenever one of them changes and that would defeat the purpose of using makefiles.
Umm... aren't you already defeating the purpose by having gcc recompile everything at once?

#27766 - Abscissa - Wed Oct 20, 2004 9:34 pm

tepples wrote:
Abscissa wrote:
I could just have the makefile compile each source with many individual calls to gcc instead of one lump "gcc -c file1.c subdir\file2.c otherdir\file3.c" like I want to do, but then I'd lose the benefits of the single lump call. Although I'm willing to split it into a single call per directory, running gcc for each individual file would be inefficient.

You claim that the setup and teardown of GCC processes when you compile one source code file at a time makes the build process inefficient. Have you actually timed it? Even then, sometimes you have to trade off some build processing time inefficiency for the increased developer efficiency that a more organized tree brings.


You're right, I was making an assumption. I guess the thing is, since what I'm making is a tool whose purpose is more for other people to use than just for myself, I've been more concerned about doing things more-or-less "the correct way" than about the way that's easiest for me to implement. And if I have a bunch of source files, it just seems preferable to make one call than a whole bunch.

But you may have a point, perhaps build times just don't get very long for GBA programs. I know that for the couple of Windows games I've done, a complete build ended up reaching a couple of minutes by the end of development. Of course, that was probably due more to having a lot of code to compile and link than to the setup/teardown of invoking the compiler, although when compiles get that long it can be worth trimming them down. But then on the other hand, if it's only going to make a difference of a few milliseconds, or take it from one minute ten seconds to one minute eight seconds, maybe it's not worth it after all.

That's a good point, I think I will do some timings.

To any of our resident pro-developers: about how long does a typical partial build and full build take for you on a GBA app? I know for AAA titles on the non-portables, full builds can take hours, but I haven't heard anything about build times for these smaller systems.

#27767 - Abscissa - Wed Oct 20, 2004 9:39 pm

sajiimori wrote:
Quote:
I did just notice that I can compile multiple sourcefiles into a single object file, but then I'd have to recompile all the sources whenever one of them changes and that would defeat the purpose of using makefiles.
Umm... aren't you already defeating the purpose by having gcc recompile everything at once?


Not really. I use the '$<' macro, which expands to all of the out-of-date dependents rather than simply all of the dependents. So I'm not really trying to compile all of the sources at once, just all of the out-of-date ones. What I meant in the portion you quoted was just that if I compiled all the sources into a single object file, I'd have to always compile all of them regardless of whether they're out-of-date or not.
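
For reference, my batch-mode rule looks roughly like this (the double colon is what makes it a batch rule in NMAKE, and $< in a batch rule expands to all of the out-of-date .c files at once):

Code:
.SUFFIXES: .c .o

.c.o::
	arm-elf-gcc -c $<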

#27770 - sajiimori - Wed Oct 20, 2004 10:08 pm

Oh, I wasn't aware of that make feature. Every project I've ever seen compiles each file with its own call to the compiler... but if it works for you, then great! =)
Quote:
But you may have a point, perhaps build times just don't get very long for GBA programs.
Oh, they do... our larger projects can take a good 10 or 15 minutes, and that's with plain C. >_<

In any case, I'm pretty sure the gcc front end just invokes cc1 multiple times, so I wouldn't expect a speed difference.

#27788 - Abscissa - Thu Oct 21, 2004 6:19 am

Ok, I just ran some speed tests, and unless there's a problem with my testing, there seems to be a very noticeable difference between compiling files all at once and with separate calls. Almost 2x (which surprised even me).

Here's what I did: I used the SimpleBGScroll sample from devkit.tk (which has only one source file) as the test file, and ran tests compiling it 100 times. Specifically, I had 100 copies of main.c whose filenames were "gbasrc1.c" through "gbasrc100.c". Then I wrote a short program that did the timing. First, it timed a very minimalistic makefile that ran gcc once for each of the 100 sourcefiles (the "Single-File" test). It repeated this five times, deleting the object files in between runs, and averaged the results. Then it timed an equivalent makefile (really the same makefile; the only difference is a short preprocessor conditional) that ran gcc once and gave it all 100 sourcefiles at one time (the "Multiple-File" test). This was also repeated five times, cleaning in between, and the results averaged.

Now, I don't know if gcc does any sort of caching between files, so to attempt to make gcc believe that the 100 copies of SimpleBGScroll's main.c were totally different, before I ran the tests I made a tool that used the original main.c as a template and generated files "gbasrc1.c" through "gbasrc100.c" that were *almost* just like main.c. The only differences were that I appended a number (1-100) to the names of all identifiers that were in global space (such as functions). For all I know, it might be possible that this didn't fool gcc and it reused stuff in the Multiple-File tests anyway. Does anyone know if this could have happened?

Oh, one other minor change I made to the source file was to comment out the line "#include "r6502_portfont.h"" because that's generated from binary data and I didn't want the tests to include timing of that. This also meant I had to comment out line number 107, which referenced a variable that was defined in that header.

Here are the timing results from that test on my WinXP 1.2GHz AMD system:
Code:
Results for Single-File Test (Compiling 100 files, one at a time):
Total Time: 87.886 seconds
Average Time: 17.5772 seconds

Results for Multiple-File Test (Compiling 100 files, all at once):
Total Time: 48.871 seconds
Average Time: 9.7742 seconds


(In the above results, I've omitted the timing for each individual run, since they were all within a tenth of a second of each other.)

For completeness, I ran the tests again with 50 source files, and once more with 10 source files. In those as well, the Multiple-File test was nearly twice as fast. Also, as a sanity check, I tried running the tests with only one source file. Since the time it takes to do that is so small, I bumped up the number of times I ran the tests from five to 25. The results of that were within two milliseconds of each other, so there didn't seem to be anything unfair between the Single and Multiple file tests.

I did a few other variations as well: When I ran the Single-File tests, NMAKE would display the command line it was running each time a file was compiled. In the Multiple-File tests, NMAKE of course displayed one command line and then all the files were compiled. I was concerned that all of the text output might have been unfairly slowing down the Single-File tests, so I ran it all again giving NMAKE the /S and /C switches to suppress that. It didn't make a difference. I also tried using the -pipe and -save-temps flags, but neither of those changed the results (although -save-temps slowed everything down by 1.5 seconds).

So, unless the compiler was cheating in the Multiple-File test (maybe it was?), calling gcc once with 100 files was almost twice as fast as calling it once for each file.

#27789 - sajiimori - Thu Oct 21, 2004 6:55 am

Very interesting! I'm gonna try this tomorrow. If I get similar results, I'll probably change our process over -- it's no big deal to run gcc from the project's /obj directory, and it doesn't prevent using multiple directories for source files, which is the part you want to be organized anyway.

#27792 - allenu - Thu Oct 21, 2004 7:06 am

Abscissa wrote:

To any of our resident pro-developers: about how long does a typical partial build and full build take for you on a GBA app? I know for AAA titles on the non-portables, full builds can take hours, but I haven't heard anything about build times for these smaller systems.


I'm just guessing here, but I don't think build times for an embedded system like the GBA should take very long unless you're dealing with a complex system that uses many source files or lots of source containing raw data to parse through.

#27801 - Abscissa - Thu Oct 21, 2004 3:32 pm

sajiimori wrote:
Very interesting! I'm gonna try this tomorrow. If I get similar results, I'll probably change our process over -- it's no big deal to run gcc from the project's /obj directory, and it doesn't prevent using multiple directories for source files, which is the part you want to be organized anyway.


Let me know what your results are either way. I'm really interested to see if GCC actually noticed that all my files were almost identical.

I'm also curious, you said your projects' full compiles can be around 10-15 minutes. How much of that time is compiling, vs gfx/audio/etc assets, vs linking? My assumption would be that the data asset stuff would take longer than the compiling for a commercial-sized game, and that linking would be fairly quick.

#27809 - allenu - Thu Oct 21, 2004 5:38 pm

Abscissa wrote:
sajiimori wrote:
Very interesting! I'm gonna try this tomorrow. If I get similar results, I'll probably change our process over -- it's no big deal to run gcc from the project's /obj directory, and it doesn't prevent using multiple directories for source files, which is the part you want to be organized anyway.


Let me know what your results are either way. I'm really interested to see if GCC actually noticed that all my files were almost identical.

I'm also curious, you said your projects' full compiles can be around 10-15 minutes. How much of that time is compiling, vs gfx/audio/etc assets, vs linking? My assumption would be that the data asset stuff would take longer than the compiling for a commercial-sized game, and that linking would be fairly quick.


I wonder, how do most people integrate their raw data into their final build? Do they convert to C or assembly files, do they INCBIN, or do they convert directly to object files? I just got started in GBA development a few months ago and found that going through the C or assembly file route was just too messy. It's raw data, so why convert to a C file that has to be parsed to end up with raw data again? Converting to object format was a lot faster (after the initial messiness of figuring out how to do it, of course*).

Right now, I have a tool that takes all the raw data files I want and turns them into one large raw data file with internal pointers to all the "assets", which I then convert to an object file that I link in with the rest of the build. I find it works pretty well.

* I should add that figuring out how to convert a raw binary file to an object file was tough at first because there's barely any info out there on the web on how to do it, plus one of the pages that describes it got the procedure wrong in one spot!
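
In case it saves someone else the trouble, the conversion step is something along these lines (the filename is made up, and the exact flags may vary with your binutils version):

Code:
arm-elf-objcopy -I binary -O elf32-littlearm -B arm assets.bin assets.o

objcopy then generates symbols like _binary_assets_bin_start and _binary_assets_bin_end, which you can declare as externs in a header and link against.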

#27810 - tepples - Thu Oct 21, 2004 5:40 pm

In those (older) versions of GAS that don't support .incbin, converting to assembly language compiles a lot faster than converting to C. But mostly I use GBFS.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.

#27811 - sajiimori - Thu Oct 21, 2004 6:28 pm

Quote:
I'm also curious, you said your projects' full compiles can be around 10-15 minutes. How much of that time is compiling, vs gfx/audio/etc assets, vs linking?
I was thinking about compiling source files and linking, where the link step probably takes less than 5% of the total time from a clean build, or maybe 95% of the time if only one source file has changed.

With assets, it's more like 30-45 minutes for our largest projects.

We convert binaries straight to object files. The resulting symbols use a different naming convention than regular symbols, so there's no namespace pollution. The main problem is that you have to include headers that declare the symbols, so partial recompiles are needed when those headers change.

#28075 - Abscissa - Tue Oct 26, 2004 7:04 pm

I've been getting very concerned about the possibility of my timing test results being skewed by the source files being so similar, so I ran some new tests on my MyRobot project from the PDROMS 2.5 compo (the biggest GBA app I have right now: 15 C files plus headers) instead of using the old "many copies of nearly-identical source files" approach.

The results from the previous tests put "calling gcc once with multiple sources" at around 60%-80% faster than "calling gcc once for each file", depending on the variation of the test I used. In this new test, the results weren't quite as impressive: the "multiple sources" was still faster but only by about 15%-20%.

Two conclusions from this:
1. gcc does seem to notice similarities between source files, and I believe my old test results were, in fact, somewhat skewed by that.
2. It is, in fact, faster to call gcc once with many source files rather than once for each source, just not as much faster as it had seemed. It should still make a noticeable difference, though, on large projects that take more than a minute or so to compile.

#28167 - SmileyDude - Wed Oct 27, 2004 4:27 pm

the best way to get a speed increase with gcc, IMO, is to use the -j option on make. For a single CPU machine, using something like -j 2 or 3 would yield a nice improvement. For multiple CPUs, I typically do (CPUs x 2) + 1 as my value to -j.

The reason this typically works is because compiling isn't CPU bound as much as it is I/O bound. So, by starting multiple instances of GCC, you can usually see an increase in overall speed because one instance is busy loading while the other instance is doing the actual compile on the CPU.

Another way to improve compile times would be to use something like distcc to spread out the jobs to other machines. I don't know how well distcc would work for cross-compile builds or on Windows, but it's worth a look if you have a few machines on your network to build on.
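
For example, on the make command line (the numbers just follow the rule of thumb above):

Code:
# single CPU: 2 or 3 jobs
make -j3

# dual CPU: (2 x 2) + 1
make -j5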
_________________
dennis

#28190 - Abscissa - Wed Oct 27, 2004 7:07 pm

SmileyDude wrote:
the best way to get a speed increase with gcc, IMO, is to use the -j option on make. For a single CPU machine, using something like -j 2 or 3 would yield a nice improvement. For multiple CPUs, I typically do (CPUs x 2) + 1 as my value to -j.

The reason this typically works is because compiling isn't CPU bound as much as it is I/O bound. So, by starting multiple instances of GCC, you can usually see an increase in overall speed because one instance is busy loading while the other instance is doing the actual compile on the CPU.


Hmm, I just looked, and NMAKE (at least the version shipping with VS6) doesn't seem to have an equivalent to that. Since what I'm working on is an AppWizard for Visual Studio, I'd prefer to stick with the make that comes with VS instead of requiring extra stuff. Although, I may do some speed tests on that, and if it's a big enough difference, maybe I'll look into including GNU Make with the AppWizard.

Just occurred to me: If you have multiple instances of GCC running under -j, then if two of them report errors/warnings at the same time, do you ever end up with race conditions turning the error reports into garbage? Or is there built-in protection against that?

#28194 - sajiimori - Wed Oct 27, 2004 7:13 pm

Our speed gain here (when changing to single-invocation) was about 40%. Thanks again! =)

#28197 - tepples - Wed Oct 27, 2004 7:23 pm

SmileyDude wrote:
the best way to get a speed increase with gcc, IMO, is to use the -j option on make. For a single CPU machine, using something like -j 2 or 3 would yield a nice improvement. For multiple CPUs, I typically do (CPUs x 2) + 1 as my value to -j.

I get this:
Quote:
make: Do not specify -j or --jobs if sh.exe is not available.
make: Resetting make for single job mode.

So don't try it if you're going through command.com or cmd.exe rather than the MSYS shell.
_________________
-- Where is he?
-- Who?
-- You know, the human.
-- I think he moved to Tilwick.

#28198 - Abscissa - Wed Oct 27, 2004 7:23 pm

sajiimori wrote:
Our speed gain here (when changing to single-invocation) was about 40%. Thanks again! =)


Wow, excellent. You're welcome :)

#28201 - poslundc - Wed Oct 27, 2004 7:32 pm

SmileyDude wrote:
the best way to get a speed increase with gcc, IMO, is to use the -j option on make. For a single CPU machine, using something like -j 2 or 3 would yield a nice improvement. For multiple CPUs, I typically do (CPUs x 2) + 1 as my value to -j.

The reason this typically works is because compiling isn't CPU bound as much as it is I/O bound. So, by starting multiple instances of GCC, you can usually see an increase in overall speed because one instance is busy loading while the other instance is doing the actual compile on the CPU.


We use a default of -j4 in our office, and it drives me nuts because on my computer (which is in the GHz range but still far slower than the CTO's, who also has multiple processors) it completely bogs down the system, so doing things like rendering a window on the screen takes about a minute, with no real appreciable speed gain in the compilation. (It maybe takes five minutes off of an hour or so.) I use -j2 instead... so I can at least surf the net while the compile is going on.

Dan.