[#] Tue Feb 13 2018 14:50:11 EST from fleeb @ Uncensored

Wait... Python? Seriously? I thought it was R.

[#] Wed Feb 14 2018 12:34:46 EST from IGnatius T Foobar @ Uncensored

There are a few high-profile people (esr is one of them) who claim that Rust and Go are now where it's at. But we've heard that song before. Python is still the perennial favorite among open source types.

[#] Wed Feb 14 2018 13:56:16 EST from pandora @ Uncensored

So, somebody wrote a tool in Go that does exactly what I wanted. (The tool is Vuls.) I got an Ansible playbook written to deploy it, and then, as part of our DR testing, I blew the server away and ran the installation playbook again. It wouldn't install, because the new version of Go wouldn't work with whatever it needed, and it would be a big hassle if I had to update the playbook every time there was an update.

[#] Wed Feb 14 2018 14:24:25 EST from fleeb @ Uncensored

I've heard this assertion concerning Go, although not Rust.

Although, I haven't had my ear as close to the cacophony as perhaps I should.

At the end of the day, I don't generally care about the language requirements... I'll work with whatever. Although, I might draw the line at Perl. A line that looks suspiciously like:

#)%(*#&)*#$*@#%)*&#%)@*@#&)$*#*%)(%^*(*#)$*@#)$

[#] Wed Feb 14 2018 14:59:49 EST from LoanShark @ Uncensored

No Ansible here; I guess we've settled on Terraform as (what I assume to be) an equivalent.

[#] Fri Feb 23 2018 09:28:05 EST from fleeb @ Uncensored

So, there's this thing... the 'Debian Policy' document. You can find it here:

https://www.debian.org/doc/debian-policy/

It's supposed to help clarify lots of information related to creating Debian packages.

Skip down to 2.2.2., "The contrib archive area", and witness how well they clarify things.

"Every package in /contrib/ must comply with the DFSG."

No link there on the acronym "DFSG".

Imagine someone not skilled in the art of creating Debian packages (say... me?) stumbles into a document with this in it. Would they think to look up at 2.2.1. and find that the acronym means "Debian Free Software Guidelines"?
Would they think to click on the [3], jumping to some random area of the document where there's another link to 'REJECT-FAQ', "which details the project's current working interpretation of the DFSG"? Would they continue through the labyrinth to follow that link to the 90s-esque web page that kinda makes all of this feel more like a legal document and less like a technical one?

Or would they just toss their hands up in the air, quietly scream inside, and just do whatever the fuck they want anyway?

[#] Fri Feb 23 2018 10:28:54 EST from LoanShark @ Uncensored

Developers can't read or write English, particularly the aspies that dominate free software development, so I'm leaning towards "do whatever the fuck."

[#] Sun Feb 25 2018 21:49:22 EST from IGnatius T Foobar @ Uncensored

I'm still waiting for Stallman to start complaining that I modified the version of the GPL that I use, even though it says "changing it is not allowed." 



[#] Thu Mar 29 2018 08:29:09 EDT from fleeb @ Uncensored

Gotta build everything to work in RedHat now.

So far, not a trivial undertaking, but probably easier than trying to make this work for Solaris.

Using CentOS, since I don't want to pay for build systems... I expect the binaries should still work on RedHat (but will test to make sure).

One of the first things I discovered is that you have to yum-install the static libraries separately from the -devel packages... unlike in Debian. I find that annoying, but I guess I understand.

Also, they don't provide a static library for OpenSSL that will actually work (it tries to drag in a bunch of kerberos nonsense that you don't otherwise need). So you have to compile it yourself... and if I'm going to do that, I may as well get the latest version and work with it, to address all the subtle security concerns.
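For anyone following along, the gist on the CentOS build box looks something like this (the static package names are the usual CentOS ones; the OpenSSL prefix is just wherever you choose to stage it):

    # the static runtime libraries live in their own packages, separate from -devel
    yum install glibc-static libstdc++-static zlib-static

    # and a self-built, static-only OpenSSL, since the packaged one drags in kerberos
    # (run inside an unpacked OpenSSL source tree)
    ./config no-shared --prefix=/opt/static-openssl
    make && make install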

Also unlike Debian, the RPM packaging system offers no consistent way to configure the package upon installation.
You are specifically discouraged from asking the installer for any information... which I understand, as many administrators would want to install remotely, etc., but there's no alternative mechanism for supplying the information you might have wanted to state during the installation (which you can do with Debian packages).
So, yeah, no consistency there. Any configuration would have to happen post-install, and the installer would need to know what to do to make that happen. I might work around this by echoing a script the installer can call to configure the installation, so at least it isn't a mystery.
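If I go that route, the %post scriptlet in the spec file would be something trivial like this (just a sketch; the script name is a stand-in):

    %post
    echo "To configure this installation, run /usr/local/bin/configure_my_tool"

It doesn't configure anything by itself, but at least the administrator is told exactly what to run next.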

Is that the normal way of doing things with RPM? After install, write to the console something like, "To configure this installation, run /usr/local/bin/configure_my_tool "?

[#] Thu Mar 29 2018 12:32:41 EDT from kc5tja @ Uncensored

That seems pretty normal. Can you perhaps build a "meta-package", which contains a program that asks for necessary details then yum-installs specifically pre-configured packages based on the information provided? Or which patches configuration after a specific yum-install completes?

It's not as integrated as Debian's approach, but it at least might make things easier for those who don't want to slog through dependency hell.

[#] Thu Mar 29 2018 14:19:49 EDT from fleeb @ Uncensored

I'm already doing Special Things to avoid dependency hell, so I don't expect I need to worry about that much.

This is a matter of ensuring that the system is configured properly before they go to use it. I come from a Windows background, where I expect the user to have an intellect closer to that of a gnat than a well-paid engineer, so it's possible I am overthinking things.

Except... I am not dealing with well-paid engineers working on Linux. I'm dealing with instructors, who ought to know better, but might not.

[#] Thu Mar 29 2018 14:35:01 EDT from fleeb @ Uncensored

The post-install configuration this tool requires is the IP address of a machine within the environment to which information will be pushed, given the nature of the tool.

Further configuration is up to the user... but that one tiny bit of information is critical to getting this to work at a base level.
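If it helps to picture it, the configuration step itself is tiny... something on the order of this (all names invented for the example):

    #!/bin/bash
    # hypothetical post-install configuration: record the address of the
    # machine the tool pushes its information to
    read -rp "IP address of the collection server: " ip
    mkdir -p /etc/mytool
    echo "report_host = ${ip}" > /etc/mytool/mytool.conf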

So, no, a meta-package won't quite cut it. Unless, maybe, I build a meta-package for every conceivable IPv4 address out there. Heh.

[#] Mon Apr 02 2018 12:42:43 EDT from LoanShark @ Uncensored

time for docker?

[#] Mon Apr 02 2018 15:03:59 EDT from fleeb @ Uncensored

Hmmm... not sure docker will allow us to do some of the weird things we need to do.

We need, for example, to see what a student has typed at a command prompt, and also report the tool's output. It gets weirder if that tool happens to be something like meterpreter.
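(The crude way to illustrate the kind of capture I mean, though it's not how we actually do it, is just wrapping the student's shell in a transcript:

    # log everything typed and everything printed in that session to a file
    script -f -c /bin/bash /var/log/session-$(date +%s).log

...and even that still leaves the problem of shipping the transcript somewhere useful.)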

[#] Mon Apr 02 2018 15:05:07 EDT from fleeb @ Uncensored

(And it still wouldn't solve our general problem of needing to forward that information to the same place for all the machines in the virtual environment).

[#] Fri Apr 06 2018 14:45:25 EDT from kc5tja @ Uncensored

Docker is never the solution. Except when it is. Choose wisely.

[#] Mon Apr 09 2018 06:29:27 EDT from fleeb @ Uncensored

I think Docker would at least partially solve a different problem that we have, where we kinda want multiple instances of our server on a single machine.
Thing is, we'd need to build something into the host machine to help manage the multiple servers... something that knows about the multiple nature of itself, and translates between clients accessing it, and the various servers, to intelligently assemble or disassemble messages appropriately.
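Roughly, I picture it as something like this (image name and ports made up):

    # several instances of the server image on one host, each on its own port
    docker run -d --name server-a -p 9001:9000 ourserver:latest
    docker run -d --name server-b -p 9002:9000 ourserver:latest

...with something in front of them (nginx, haproxy, or a small custom proxy) that knows which clients belong to which instance and routes the traffic accordingly.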

But, we haven't started down that road yet. There's still time to consider alternatives, heh.

[#] Mon Apr 09 2018 06:38:40 EDT from fleeb @ Uncensored

On another topic, I recently learned about the rpath linker option for GNU compilers.

Apparently, in closed-source shops that want to provide binaries for Linux, this option lets you link to the libraries you want and include those libraries as part of the packaging for the overall product, in such a way that the binary always loads a known environment.
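As I understand it, and this is just a sketch with made-up names rather than our actual build line, it comes down to something like:

    # link against bundled copies of the libraries, and tell the loader to look
    # for them relative to wherever the binary itself ends up installed
    g++ -o mytool main.o -L./bundled/lib -lssl -lcrypto \
        -Wl,-rpath,'$ORIGIN/../lib'

...and then the package ships those .so files in <prefix>/lib, next to <prefix>/bin/mytool.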

The rest involves compiling on the oldest Linux you want to support to ensure that most kernels will support your binary.

Does anyone here have experience with that? Because if I can reduce my build machines to just one or two machines for Linux, I'd rather like that.

[#] Mon Apr 09 2018 16:28:24 EDT from LoanShark @ Uncensored

The question is: are you using -rpath for a shlib that's part of a standard baseline Linux, or for something that can't be assumed to exist (in a compatible version)?

You need to have good knowledge of your shared library dependencies/ecosystem. Otherwise this question is not phrased in the form of a question...
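Concretely, something as simple as this (binary name being a placeholder) tells you what you're actually dragging in:

    # what the dynamic loader will try to resolve at run time
    ldd ./mytool
    # or just the declared dependencies, without resolving them
    objdump -p ./mytool | grep NEEDED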

Anyway, think twice about supporting more than 1 or 2 recent Linux distros, and perhaps think about shipping a virtual appliance (either docker image or traditional virt) to manage dependencies?

A customer with the resources to install and support Linux, probably also has the resources to install and support a *recent* Linux.

[#] Tue Apr 10 2018 08:16:56 EDT from fleeb @ Uncensored

Trying not to reveal overmuch what we do, but:

1. We *must* work on a 10-year old distribution of Linux, because of the sort of ways this software will be used.
2. At least one of the things we do is accomplished using techniques that won't work very well if it has to go through a docker layer. This said, I'd like to accomplish the same thing another way, by taking advantage of the fact that Linux is open-sourced, and we can alter the original sources to do things we need, as long as we give those sources back to the community (who may keep it or throw it away, won't matter to us).

At the end of the day:

* I want to reduce the number of build machines we must have to build our software.
* I need this to work on enough distributions of Linux to cover at least 80% of use cases (hence Debian & RedHat).
* I need this to work on at least one 10 year old distribution of Linux.
* The package I create should not have any dependencies... it should be capable of standing alone.
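
One check I'd wire into the build to police those last two points (binary name is a placeholder): the newest glibc symbol version the binary references has to be something the 10-year-old distribution already ships.

    objdump -T ./mytool | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1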

If only we were open-sourced... this would be easier. We'd just distribute the sources and be done with it. But we're not... we have to distribute binaries.
So it's painful.

Definitely learning a lot about Linux, though, as a consequence of these requirements.

If I could address #2, I might be able to do this in a docker instance. I think. Maybe. It'd be weird, I think, but it might be possible. But running in a docker instance probably won't address #1.
