Huh. I tried valgrind, and it has no trouble reading the symbols, despite gdb's problems. Weird.
It's complaining about __libc_write, saying that write(buf) points to uninitialised bytes. Some underlying library is doing that, but this is a start.
The two uninitialized bytes were in '__pthread_initialize_manager' and 'pthread_create'.
I suspected I screwed something up building this compiler. This kinda points to that, if I'm understanding this correctly.
Ugh. I can either try rebuilding the compiler (again), or punting. Soo tempted to punt.
I learned today that it isn't necessarily wise to valgrind g++.
Sometimes, curiosity can't kill the cat, but can put it into a vegetative state.
seems to be faster than valgrind...
however, you got it sorted out - sort of...
Ugh... requires recompiling the code. I'm not sure I like that. One of the libraries (it's a set of libraries, actually) takes a good while to compile, and seems to be the source of some of my problems.
That said, I can work with the library in a headers-only fashion, which would allow me to work with this asan thing, so.. I dunno.
Still, valgrind was fast enough. But thanks... that's another tool I can use sometime if needed.
I have elected to move on for the moment... and I have discovered that glibc has enough differences between 2.13 and 2.15 to lead to seg faults.
not sure what you're using valgrind for - either it's leak tracing, or heap overrun debugging, or something else that overlaps with other tools out there.
there are many tools that COULD MAYBE be used to analyze heap overruns or leaks, including but not limited to:
* taking your binary, compiled for some ancient system, and running it on a more modern system where newer tools are functional
* The Oracle "dbx" debugger (descended from the BSD one) is available for Linux from http://www.oracle.com/technetwork/server-storage/solarisstudio/overview/index.html and includes a leak-checking mode: see http://www.oracle.com/technetwork/java/javase/memleaks-137499.html#gbyza for a JVM-centric example of leak checking. I have had mixed results with it. It looks promising, but tends to freeze the JVM. Simpler programs may have fewer problems - the JVM does a lot of low-level tricks.
* libumem, if you happen to have access to a Solaris box
* libnjamd - used to exist on Linux, seems to be dead and unsupported now
* libefence - the ElectricFence library. The classic on Linux until valgrind came along. Can be useful for both leaks and heap boundaries
* libduma - This is a fork of libefence, also works on windows, might be more widely distributed these days on Linux
* glibc's own facilities include mtrace, MALLOC_CHECK_, __malloc_hook, mcheck, and various environment hooks to the same. mtrace may not be thread-safe(?)
all of these tools have problems, none are perfect, the way I see it.
valgrind memcheck also checks for reads of uninitialized values - which is quite useful for finding situations where ifs do random things.
however, it's got the biggest performance overhead of the bunch.
tcmalloc not only promises to be faster than libc's malloc, it also has some heap profiling stuff & double-free checks etc. - however, it doesn't tell you where the memory you accessed was freed earlier.
If I compile a binary on an ancient system, then move that binary to a modern system, wouldn't it have other problems? I thought Linux didn't really support that level of upward compatibility.
Valgrind at least helped me zero in on the lines of code where problems manifested.
Oddly, it seemed to interpret the debugging symbols better than gdb.
I do have access to a Solaris box, but it's similarly ancient. And, yeah, I have to try to support the damned thing. I must say, when you're forced to work with extremely old versions of operating systems, you learn a lot. It's a trial by fire, but it's also a crucible for learning about the various flavors of POSIX-oriented operating systems.
I'm blaming the compiler. The product works properly on other environments, just not this old thing with the dubious compiler. I think, before I spend too much time investigating this further, I should consider compiling the oldest compiler that works with our product, and use that for building the system. Hopefully, *that* compiler will be old enough that the OS won't object to it (that is, that I won't have to make any alterations at all just to get the compiler to build properly).
If I compile a binary on an ancient system, then move that binary to a
modern system, wouldn't it have other problems? I thought Linux didn't
really support that level of upward compatibility.
If all it needs is libc & libstdc++ it should work OK. Compatibility with the system would be fine - compatibility with the debugging tools is another question entirely.
So it depends on your library dependency footprint.
I'm blaming the compiler. The product works properly on other
environments, just not this old thing with the dubious compiler. I
think, before I spend too much time investigating this further, I
should consider compiling the oldest compiler that works with our
product, and use that for building the system. Hopefully, *that*
compiler will be old enough that the OS won't object to it (that is,
that I won't have to make any alterations at all just to get the
compiler to build properly).
I don't know, there could be binutils issues as well. Frequently a newer GCC requires newer binutils, but then the output of the newer linker might have issues on the older kernel/ld.so/glibc stack.
I'm not really knowledgeable about those areas... if you can limit yourself to a gcc that works with the binutils that originally shipped on the system, you may have a better chance of success.
Oh, and binutils on Linux has always been funky. Periodically, H.J. Lu sends out an announcement of "here is the latest version of The Linux Binutils", and the gcc folks duly complain "why are there so many patches against upstream?"
I did update the binutils, since the one on the system wasn't compatible with the updated compiler.
So, yeah, that's probably the problem.
I might be on the right track, then, with an older compiler. Hopefully, I can find one that's new enough for the code I use, yet old enough for this machine.
Ugh... this is like Goldilocks.
"And this compiler was *just* *right*."
The compiler, the compiler, the compiler is on fire!
We don't need no water -- let the motherfucker burn!
BURN, MOTHERFUCKER, BURN!
That's a joke.
I can't get the Jenkins client to run on this Valhalla Redhat machine because I can't install a new enough version of Java on it.
2014-08-07 13:38 from IGnatius T Foobar @uncnsrd
Valhalla, NY is just a couple of miles away from here. You want me to
drive over and try it?
If you were a god, I could suggest "Entrance of the Gods into Valhalla" as the accompanying music.... <evil grin>
I am not sure if this is because of something special we're doing with LD_PRELOAD, but it looks like statically linking libs into your shared objects causes the shared object to no longer preload.
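For reference, here's the shape of a preload shim (all names here are hypothetical), built as `gcc -shared -fPIC -o libshim.so shim.c` and activated with `LD_PRELOAD=./libshim.so ./prog`. The relevant detail: interposition only works on symbols the target resolves dynamically - calls into a library that was statically linked into an object are bound at link time, so there's nothing left for the preload to override, which may be what you're seeing.

```c
#define _GNU_SOURCE
#include <dlfcn.h>     /* dlsym(RTLD_NEXT, ...) to find the real symbol */
#include <stddef.h>

/* call counter; exported so a debugger or test can inspect it */
unsigned long shim_calls;

/* interpose malloc: count the call, then forward to the next malloc
   in the lookup order (normally libc's) */
void *malloc(size_t n) {
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    shim_calls++;
    return real_malloc(n);
}
```

Note the shim avoids printf inside malloc - printf itself allocates, which would recurse.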
I am not sure if this is because of something special we're doing with
LD_PRELOAD, but it looks like statically linking libs into your shared
objects causes the shared object to no longer preload.
Hmm... I'll look into that. That doesn't look familiar.
Fiddly, getting the right command line arguments.