Ya, living in the Midwest does come with its inherent weather issues. ( and explains why 1/2 my career has revolved around the automotive industry )
I've not *noticed* a single power outage since I moved to Arizona, although I'm certain at least ONCE I came home to flashing clocks set at 00:00 in the kitchen.
Mon Feb 22 2021 12:56:29 EST from Nurb432: Ya, living in the Midwest does come with its inherent weather issues. ( and explains why 1/2 my career has revolved around the automotive industry )
Lived in an apartment building for a bit; at 12 noon the power would blip. Every weekday, but not weekends.
They never would do anything about it. Just a blip tho, so more annoying than anything else. Got a UPS for my computer. When i moved out, i had just over three years of uptime. I actually left it running and carried it out with the UPS attached to see if i could pull it off. I didn't quite make it to its new home before the juice ran out.
Oooh, the 3000 series, running MPE, I'll bet ... one of the last of the non-unix minicomputer operating systems.
I almost feel sorry for people who have such large investments in legacy environments. No one would launch a new application on AS/400 or Unisys today, but those environments still exist and are still being supported. It's so cute, they even put little web servers on them and let people pretend they're using a modern computer.
Back when I was in college we were stuck with a Burroughs A-9 mainframe. Unisys is apparently still supporting these beasts, and they offer new mainframe models, but they also offer their mainframe emulated on a standard server running VMware. How cute.
Later that year we finished migrating all the serial terminals to PC emulators running over the actual network back to the DC, so the hard-wired RS232 ports were 'dead' at that point. First we went from stupid token ring to point-to-point ring, where the 'ring' was in the switches and it used cat5 cable. Actually was pretty cool. Then we added TCP/IP to the token ring.. Then swapped out the ring switches for Ethernet switches. ( and lots of new network cards )
The 'plant' devices and printers ran IP across coax broadband, sharing the wire with the TV system. It was a nightmare mixture, until everything finally got migrated to cat5/Ethernet. The plant was built back in the late 50s.. so you can imagine the generations of systems still there.
The next year they migrated to an HP UNIX box, about the same size as the original terminal for the 3k.
Couple of years later, the parent corporation cut them loose to become independent. Couple of years after that they folded, as there was no way to charge what it cost to manufacture stuff. They never had to care, as they were 'just another parts plant' and the overall cost at the corporate level was still at a profit. Another 10 years or so and the building isn't even there.. just a lot of gravel, about 2 million square feet of it..
Sat Feb 27 2021 17:37:41 EST from IGnatius T Foobar: Oooh, the 3000 series, running MPE, I'll bet ... one of the last of the non-unix minicomputer operating systems.
I take that back, it's more than 2 million. That was just the plant + attached admin building.
There were all the outbuildings, warehouse, parking lots, waste plant, power station. Not to mention all the grass around the edges.
Up at GM where i was decades earlier, it was several times that. Multiple plants that size, all mushed together into one enormous town-sized structure + several other buildings around the city ( including the chip fab, which was such a cool place ).. From what i hear, they are all gravel now too. Not been up there since perhaps 1991. Nothing there for me after i left. Didn't burn the bridge, but there was no going back after that mistake.
Sat Feb 27 2021 19:52:35 EST from Nurb432: .. just a lot of gravel, about 2 million square feet of it..
"My team wants a new Q in your ticketing system. We want it to email our shared mailbox, be automatically closed, and then we work the issue from the shared mailbox."
Really? Why even bother?
"[200~Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.[201~"
--jwz
"its asking for domain, what is that?". So he tried to enter the FQDN for the workstations.. A field tech, trying to login to an application..
Glad its Friday and i took the day off. Tho people out in public are extra stupid today too for some reason, about took out a biker. Left hand signal, in a turn lane at a light.. Light turns green he pulls in front of me... then turns a block down. Just about got nailed.
It wants you to log in to Apollo Domain.
(Hey, when *I* learned routing, it was a protocol you needed to be able to handle. Apollo Domain, along with Appletalk, Banyan VINES, IPX, XNS, DECnet, Source Route Bridging, and of course IP. And these younguns think they have it hard because they have to configure IPv4 and IPv6 on the same device.)
If he was new, i would have been a bit more forgiving. But, with the disclaimer that even a beginner on the field staff should have a base level of knowledge. Not that everyone needs to be an expert in everything of course, but this is pretty basic stuff here.
He's been with us at least 10 years. We have ~10 domains in our forest, and users in each that he would have supported at some point. This would not be the first time he had to deal with this.. That and he logs in every morning at his desk *with* the domain name.. Only recently could you sign in with your email address (if unique, as we have some dups out there since we can't manage our network properly) and skip the domain part. But few people even know that is an option.
Sat Mar 06 2021 13:12:50 EST from IGnatius T Foobar: It wants you to log in to Apollo Domain.
Maybe it was a brain fart?
I mean... sometimes I do things and afterwards go... "that was fucking stupid."
Other times I do stupid fucking things, and it turns out I was being brilliant.
Sat Mar 06 2021 15:29:17 EST from Nurb432: If he was new, i would have been a bit more forgiving.
domain name.. Only recently could you sign in with your email
address, if unique, as we have some dups out there as we cant manage
*ahem*
It's not your email address, it's your "User Principal Name"
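(Quick illustration with made-up names: the legacy logon is something like CORP\jdoe, the UPN is something like jdoe@corp.example.com, and the SMTP address might be jdoe@example.com - three strings that only sometimes line up, which is why "sign in with your email address" only works when they happen to match. On a box with the AD PowerShell module you can compare them with something along the lines of:
Get-ADUser jdoe -Properties EmailAddress | Select-Object SamAccountName, UserPrincipalName, EmailAddress
assuming a jdoe account actually exists.)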
hehe
True, but if we told them that, they would be lost.
And actually we have a few UPNs that are weirdo.. Not real sure why. Not my area anymore.
Wed Mar 10 2021 09:26:30 EST from IGnatius T Foobar: It's not your email address, it's your "User Principal Name"
No. Not in this case.
Tue Mar 09 2021 23:00:21 EST from ParanoidDelusions: Maybe it was a brain fart?
"What is the average load *at idle* on our standard desktop/laptop hardware", "so i can compare resource use to what i am seeing"
"I have heard of VMs but i know nothing about them. You will have to talk to the manager over that area about our normal VM resources. And system load will depend on the applications you are running"
Really? Learn to F-ing read.. idiot
Well, I've got the VM up and running off the Easy Install using a copy of my production BBS db - but I had to revert to the Easy Install in order to get it up and running.
I'm stoked, though. This is good forward progress. I'm starting to think about this a little more methodically now.
Now that I have one good VM running a copy of the production BBS on Proxmox... I can just clone it, and do my testing on the clone, right? That way, if I bungle anything, I just delete the cloned VM, and reclone it from the good source VM, and save myself a bunch of effort.
Mon Apr 19 2021 23:07:56 EDT from ParanoidDelusions: And once I get this all sorted out, at the very least I'll be able to back up live from my prod Citadel to a VM - and be able to restore if something ever goes wrong. I'd rather have it running in prod on a VM - but that is probably going to take figuring out adding a third USB NIC or WiFi that Proxmox supports to the machine and just disabling the Intel NIC that is onboard.
A full backup will do that for you. No need to clone it. If you are really paranoid, copy it off to a drive or something.
Tue Apr 20 2021 00:58:10 EDT from ParanoidDelusions: Now that I have one good VM running a copy of the production BBS on Proxmox... I can just clone it, and do my testing on the clone, right?
But a full backup of the VM would entail a full restore - which is time consuming, right?
Maybe I didn't describe it right.
I've got a production server, my BBS. On another physical machine running Proxmox, I built a VM test server. I then restored a backup of just Citadel from the production machine to the test machine. I then want to use the VM as a test server... with the intent of doing things that almost certainly will break Citadel, maybe the VM's OS. If I totally bork it - wouldn't it be easier to just have a clone/snapshot on Proxmox than to rebuild/restore a bungled VM? I want to go back to a point-in-time image *before* I messed it up - not *fix* whatever I did.
I mean... I'm coming from a world of Citrix - where the idea of having snapshots is that if something goes wrong with the server, snapshots make it easier to back up *and* restore than traditional backups. Rather than restoring a backup - you just blow out the VM that has gone bad and bring the new image up in its place. Makes new deployments easier too. You snapshot the baseline build - then just bring the clone/snapshot up and make the changes for it to be another system. Does this concept work differently in Proxmox than on other VM platforms?
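Side note - the same concept does exist in Proxmox. Roughly what those options look like from its CLI, with VM ID 100 and the snapshot/clone/storage names below being placeholders:

# snapshot the test VM before an experiment, roll back if it goes sideways
qm snapshot 100 preinstall
qm rollback 100 preinstall

# or keep a known-good source VM and work on a disposable full clone
qm clone 100 101 --name citadel-test --full

# or take a full backup and restore it later - slower, but it survives losing the VM entirely
vzdump 100 --mode snapshot --storage local
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 102

The exact dump path, filename, and extension depend on the storage and compression settings.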
I was thinking about other network logistics while lying awake in bed. Right now my production server is on bare metal, dual-homed: a wired NIC going to the ACE router, and the WiFi connected to my own internal network (which goes out to the Internet over my residential ISP).
The nature of a VM would mean that if I want to host the production server on Proxmox - the Proxmox server has to have a route to the ACE Cisco router... so this means... bear with me here...
The physical NIC would go to ACE, through the Cisco, and needs to have an IP address from the pool they gave me. The VM would have a virtualized NIC that would also be on the subnet pool assigned by ACE. So, virtualizing Citadel in a Proxmox VM is instantly going to consume two of the 5 IP addresses that ACE assigns. One for the physical NIC, one for the VM.
Now, if I still want the Proxmox interface visible on the internal network - that would simply entail having a second physical NIC on the Proxmox host - hooked up to my internal network switch and assigned an internal network IP address. I wouldn't be able to connect *directly* to the VM through the internal network - I'd instead access the internal IP address of the Proxmox interface, then launch the VM console there, and that would give me console access to the VM.
The IP address scheme gets somewhat complex once you add a VM host that you want accessible from both the public and internal networks. I don't understand exactly how Proxmox handles address translation from physical NICs to VMs and virtual NICs. I'm not there yet... the whole VM is really a test box right now. But if I get familiar with it, I think it will be better if I run the actual production BBS in a VM - maybe even with a second Proxmox node and replication from one node to the other.
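For what it's worth, out of the box Proxmox doesn't do any address translation at all - the default model is bridging: each physical NIC gets a Linux bridge (vmbr0, vmbr1, ...), a VM's virtual NIC is attached to one of those bridges, and the guest sits directly on that network with its own IP. A minimal sketch of the host's /etc/network/interfaces for the two-network setup described above, with made-up interface names and addresses:

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        # one ACE-assigned address for the Proxmox host itself (placeholder)
        address 203.0.113.10/29
        gateway 203.0.113.9
        # physical NIC cabled toward the ACE/Cisco router
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto eno2
iface eno2 inet manual

auto vmbr1
iface vmbr1 inet static
        # internal LAN address used to reach the Proxmox web UI (placeholder)
        address 192.168.1.10/24
        # physical NIC cabled to the internal switch
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

The Citadel VM's virtual NIC would then attach to vmbr0 and get a second ACE address inside the guest - which is exactly the 'two of the 5 IP addresses' math above.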
Right now, all of this is academic. I've just made an image that approximates my production server as a VM on Proxmox on my internal network - and I've managed to back up my prod Citadel and restore it on the VM - and I want to test appimage installs on that server. I don't want to back this server up, everything on it is disposable - but if something goes wrong - I want it to be as easy as possible to restore to the point in time before I screwed up. Doing a backup and restore of THAT seems like the difficult way, with a VM?
Tue Apr 20 2021 07:21:25 EDT from Nurb432: A full backup will do that for you. No need to clone it. If you are really paranoid, copy it off to a drive or something.