I've been doing... things... with makefiles.
I have a number of binaries we need to create for this client. Currently, a script iterates over each folder, executing make, building the binaries.
It's kind of ... stupid. As in, really annoying to handle.
I feel I should use make itself to do this kind of thing. I mean, it's apt for this work, right?
So, I started researching what it would take to do this stuff with recursive Makefiles, the way a lot of projects do it, when I saw that that's a Very Bad Idea.
It's a bad idea for a few reasons, but maybe the biggest one is the performance hit it creates, recalculating crap all the time.
It's smarter to build a single Makefile that includes other makefiles.
It's also weirder, because you have to think about folders differently, and the potential for clashing variables, etc. But, it has a charm, in that you can establish a variable in the primary Makefile, and use it in the included makefiles to help drive things. It forces you to think about what's common throughout the product, versus what's specific to just your project, and write the included makefile accordingly.
This appeals to the part of me that hates wasted effort. I was originally looking at having to copy a kind of prototype makefile around and modify it, but taking this approach, I just address the differences.
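A sketch of what that layout might look like (module names and variables invented for the example, not my actual tree):

```make
# Makefile (top level) -- the only one make ever runs
CC      := gcc
CFLAGS  := -Wall -O2         # common settings every module inherits

all:                         # listed first so it stays the default goal

MODULES := foo bar           # hypothetical subdirectories
include $(patsubst %,%/module.mk,$(MODULES))

all: $(BINARIES)             # modules append their outputs to BINARIES

clean:
	rm -f $(BINARIES) $(patsubst %,%/*.o,$(MODULES))
```

Each included file then only carries the differences:

```make
# foo/module.mk -- only what's specific to this module lives here
BINARIES += foo/foo

foo/foo: foo/main.o foo/util.o
	$(CC) $(CFLAGS) -o $@ $^
```

Note the two `all:` lines in the top-level Makefile: declaring the target before the includes keeps it the default goal, and the second line just adds prerequisites once the modules have registered themselves.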
I guess I'll never get away from dealing with build systems and setup.
Funny you mention that fleeb, I've been doing something similar, but for different
reasons. Like all sensible people, I've had it up to here with GNU Autotools;
they're too complex, handle use cases that haven't been relevant in decades,
and support terrorist regimes (ok well maybe autoconf doesn't, but RMS does,
and I still don't like all the damn M4 macros).
So I've started building my projects with a setup I occasionally call "conf-IG-ure"
:)
It starts with a simple Makefile that has the line "include config.mk" in it. No config.mk ships with the project. Later in the Makefile there's a rule that says: to create config.mk, run ./configure. "configure" is written in plain old shell script, no M4, no autotools crap. It accepts some of the more common options (prefix, etc.) and then generates config.mk.
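In skeleton form (target names invented), the trick leans on GNU make rebuilding a missing included file and then restarting itself:

```make
# Makefile
include config.mk            # missing on first run; make builds it, restarts

myprog: main.o
	$(CC) $(CFLAGS) -o $@ $^

# How make knows to create config.mk:
config.mk:
	./configure
```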
So it works either way; if the user runs configure first, it works; if the user runs make first, it works; either way it's simple, gets done in just a few screenfuls of text, and did I mention no M4?
No, it doesn't work on a 36-bit BoardBox IV from 1990 running Oddball/IX v11 with the Lattice C compiler. It works on the systems people are actually using in real life.
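And a configure in that spirit can be almost nothing (option handling here is an invented sketch, not the real script):

```shell
#!/bin/sh
# configure -- plain shell, no M4. Parse a couple of common options,
# then spit out config.mk for the Makefile to include.
prefix=/usr/local
for arg in "$@"; do
    case "$arg" in
        --prefix=*) prefix=${arg#--prefix=} ;;
    esac
done
cat > config.mk <<EOF
PREFIX = $prefix
CFLAGS = -O2
EOF
```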
Did you know your makefiles can have functions built into them?
Oddly, I found that feature useful.
https://coderwall.com/p/cezf6g/define-your-own-function-in-a-makefile
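For anyone who hasn't run into them, a contrived sketch (program names invented): define a canned rule template, then stamp it out per program with $(call) and $(eval).

```make
# $(1), $(2) are the call-site arguments; $$ defers expansion to eval time.
define make-program
$(1): $(1).o
	$$(CC) $$(CFLAGS) $(2) -o $$@ $$^
endef

# Stamp out link rules for two hypothetical programs:
$(eval $(call make-program,hello,))
$(eval $(call make-program,debugme,-g))
```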
You might also find this article interesting:
http://nuclear.mutantstargoat.com/articles/make/
"The purpose of this document is to explain how to write practical makefiles for your everyday hacks and projects. I want to illustrate, how easy it is to use make for building your programs, and doing so, dispel the notion that resorting to big clunky graphical IDEs, or makefile generators such as autotools or cmake, is the way to focus on your code faster."
And, I suppose, I should follow all of this up with Peter Miller's paper on why he regards recursive makefiles as 'harmful':
http://aegis.sourceforge.net/auug97.pdf
Ironically, most of the "fancy" Makefile tricks require GNU Make, which you
need to use in order to have Makefiles robust enough to avoid the GNU Autotools.
It could be argued that if you've made a configuration language complex enough that you need another program to write it for you, then you've probably already failed.
2018-04-23 14:46 from fleeb @uncnsrd
So, you have a pool of threads, that occasionally gets hung because
you have a situation where a particular kind of job isn't asynchronous?
Couldn't you convert that job into an async job?
No to both questions.
The thread pool gets hung because of circular dependencies between tasks. The fix is not to make either of those calls asynchronous (can't, because by their nature they are queries with replies), but to change which network service is responsible for one of the queries, and eliminate the circular dependency.
Probably one of those things where the call graph has grown in complexity beyond the point at which you can see the circular dependency. Hm.
If people were to actually look at the logs now and then... they'd be like, "wait, A is getting tons of timeouts waiting for B. B is getting tons of timeouts waiting for A. Lightbulb!"
But people don't look at the logs.
That's kind of interesting... is there a way to have a machine plow through the logs and provide a kind of graph to visually represent this?
Or is there even an interest in showing such things?
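As a toy sketch of the idea (the log format here is invented), even awk gets you most of the way to spotting mutual timeouts:

```shell
# Fake log: "<time> <service> timeout waiting for <service>"
cat > /tmp/sample.log <<'EOF'
10:01 A timeout waiting for B
10:02 B timeout waiting for A
10:03 C timeout waiting for D
EOF

# Record each "X waits on Y" edge; at the end, report any pair of
# services that each time out waiting on the other.
awk '/timeout waiting for/ { edge[$2 SUBSEP $6] = 1 }
     END {
        for (e in edge) {
            split(e, n, SUBSEP)
            if ((n[2] SUBSEP n[1]) in edge && n[1] < n[2])
                print "circular: " n[1] " <-> " n[2]
        }
     }' /tmp/sample.log
```

A real version would need the actual log grammar and probably graphviz for the picture, but the core is just building an edge list and looking for cycles.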
2018-04-30 06:00 from fleeb @uncnsrd
That's kind of interesting... is there a way to have a machine plow
through the logs and provide a kind of graph to visually represent
this?
We used to have Newrelic, which does that at a high price-point. Eventually maybe we'll have Amazon X-Ray, which is supposed to be sorta similar, but requires more manual integration.
For now, we just have log files. And Splunk. So one has to put two-and-two together oneself. But one has a fancy search engine to help.
A fancy search engine which drops big index blobs on the floor all the time.
I guess this kind of programming effort wouldn't be thought of as 'useful', since it doesn't directly contribute to your primary mission.
Hence why you'd go with a 3rd party solution.
Still, it makes me wonder if there's an open source effort for this kind of thing that has any quality to it. Probably something integrated with syslog output that keys off of regex or something...
Hrm...
Yeah, so this Java thing that frustrates me...
Jenkins is built on Java.
Fine, good, no problem, but they insisted on upgrading to Java 8.
This means if you want Jenkins to work with your slave, the slave has to run Java 8.
I have a situation where I need to run a slave that's ridiculously old. I noticed I'm not the only one, either. I can't use Java 8. I can barely use Java 7.
So, I can't upgrade Jenkins.
I'll note that I can use the most recent version of C++ on this slave, and it'll merrily run my code without any issues if I'm smart about it. Can't do that with Java.
So, yeah. That's exactly what we want in a build system.
jenkins is a bit big and complicated, but it has the ability to run arbitrary tasks, so you should theoretically be good...
we had to migrate off the maven style projects and onto Freestyle projects for certain builds that weren't compatible with Java 8 (because we're actually compiling Java code, y'know, and this matters) but you should be able to do just about anything if you migrate to Freestyle or Pipelines or whatever else is supported and widely used and works for you.
Perhaps.
I could establish an NFS mount point such that files pulled from source control on the one machine get placed on the other machine, then set up a command to ssh over and perform the build. It just feels like baling wire and duct tape to do that instead of getting Java 8 running on the machine or something.
I may do something like that, though. I hate having this thing so terribly out of date. Further, I hate that whoever installed this decided to use 'Linux In A Box' instead of something like Debian, Ubuntu, or CentOS (basically, something mainstream with steady security updates and no additional cruft).
"This" being our Jenkins server. I have a lot of things I'd like to change about what we do there, all requiring I learn more about Jenkins, ultimately.
I feel like I spastically threw everything together in a rush to ensure that we had steady builds, such that I look now, a couple of years later, and wonder what-the-fuck, and I-can-do-better-than-this.