This is the problem with all of it... and the solution does sound very collectivist, unionist, solidaritist...
But nobody stands up because they're afraid that alone, no one will stand with them (which is likely) and they'll be destroyed for doing the principled thing.
If we ALL said fuck it to ALL of it... hell... for a week... if we got that organized. This shit would change.
And it *has* happened, it can happen - but mostly, we just put up with the shit and wait for one brave, empowered person to call it for the bullshit it is and change it.
Thu May 27 2021 13:51:36 EDT from Nurb432
Doesn't mean you "personally" accept it.
The only other option is to find another job, with its own set of draconian rules and monitoring. It's not like you can really get away from it, in practical terms.
They're paying you. You can end the arrangement at any time. Every job is a tradeoff of benefits: pay vs. quality of life vs. workplace amenities vs. doing work you love vs. etc etc etc etc.
Wed May 26 2021 10:15:20 AM EDT from IGnatius T Foobar
Keeping off the road because bella asked for it. :)
You're a love
"you will return to the office July 6. You will work in the office a minimum of 6 days every 2 weeks ( our pay periods re 2 weeks )"
"There will be no more assigned desks but there will be a variety of hotel spaces to use ( what we call non-assigned desks ) and there will be an app related to reserve a space as well as what resources they have"
Bleh.
Oh, that does suck.
Mostly because it seems like just a "We want to remind you who is your boss..." kind of policy. The whole "no assigned desks" thing also strips you of any agency. "We're also going to use this opportunity to strip you of any dignity you *did* have before in your role as our employee."
Wed Jun 02 2021 17:46:20 EDT from Nurb432
"you will return to the office July 6. You will work in the office a minimum of 6 days every 2 weeks (our pay periods are 2 weeks)"
"There will be no more assigned desks but there will be a variety of hotel spaces to use (what we call non-assigned desks), and there will be an app to reserve a space, as well as see what resources they have"
Bleh.
My nephew once brought a lesbian home after closing time. I knew she was a lesbian the minute I saw her. They messed around all night, and my nephew was so drunk that the next day he reported to me how epically bad his performance was, while still insisting she wasn't a lesbian.
This is on topic... just bear with me...
So, on top of whiskey dick, he was clumsy and sloppy and just generally terrible. A few hours later, the bartender called my nephew up - told him he should stay away from the bar... that the girlfriend of the girl he had taken home was SUPER pissed... they had broken up and the girl he brought home had decided to try the other team again - mostly to offend her girlfriend.
I said, "congratulations, buddy - you had a lesbian in bed considering switching teams last night, and she woke up this morning and thought to herself...
"*That* is why I'm a lesbian..."
And went right back to her girlfriend..."
That is how I feel about working for a typical corporate office ever again. Like a Lesbian who just gave my nephew a shot at making her switch teams.
Thu Jun 03 2021 10:32:14 EDT from IGnatius T Foobar
Hoteling was becoming a trend even before the chinavirus drove everyone out of the offices. Much has been written about it, mostly pieces about how it is at once a cost savings and a productivity killer.
"Just a quick update to Cisco OTV" should only be a few seconds of outage ( not 100% about what that means yet, something about "virtualization of layer 2 network across the enterprise" )
So it goes down hard, partially, and is taking pretty much everything else with it. Network guys are with CISCO on the phone so its bad.. 100s of VMs failing-over to off-site where they cant talk back to the home data center, VPN barely working, storage arrays are inaccessible. Phone hotlines ( like for police.. ) are toast.. what freaking mess.
And i need to do something this afternoon.
"issue is not resolved yet, we are still waiting on a engineer from cisco as they are dealing with multiple OTV outages at many customer sites".
LOL
So they pushed an update that hoses multiple mission critical site locations?
That bites. I *hate* days like this in enterprise IT. It literally gives me that "sick in my stomach" feeling just thinking about anyone *else* having to go through that.
Thankfully, now that I'm 'just' an app guy, infrastructure issues are not my problem. I get to sit back and watch the show.
Though I did have some work to do that was impossible until evening.
Mon Jun 07 2021 01:22:33 AM EDT from ParanoidDelusions
So they pushed an update that hoses multiple mission critical site locations?
That bites. I *hate* days like this in enterprise IT. It literally gives me that "sick in my stomach" feeling just thinking about anyone *else* having to go through that.
Intel didn't draw a distinction between app and server/infrastructure. System Engineer meant you were responsible for the server, attached hardware (DAS, NAS, SAN, whatever), and the apps it served. We had networking - and development - but development worked in test labs, and then we moved the finished apps up through dev, test, day1, preprod, and production with less assistance from Development at each step.
Which I guess was a smart move. It kept the developers out of production environments and tried to ensure that production apps were so polished that someone without development skills could land them.
Mon Jun 07 2021 07:51:39 EDT from Nurb432
Thankfully, now that I'm 'just' an app guy, infrastructure issues are not my problem. I get to sit back and watch the show.
Though I did have some work to do that was impossible until evening.
Mon Jun 07 2021 01:22:33 AM EDT from ParanoidDelusions
So they pushed an update that hoses multiple mission critical site locations?
That bites. I *hate* days like this in enterprise IT. It literally gives me that "sick in my stomach" feeling just thinking about anyone *else* having to go through that.
We used to be more like that, but as things got more complex over the years (both in products and in our environment), and our user base grew (making the risk higher of being on the nightly news if we screwed up), it made more sense to separate out duties across architecture boundaries.
Sure, it was nice to be able to get into the VM host or the server OS that ran my app when I wanted to, but it's also nice to just point my finger when it goes up in smoke a bit down the line.
I think our core user base is around 40k, but it's in the millions if you want to include all non-core interactions with our 'customers'. Sure, there are larger environments out there, but it's not a 'seat of the pants' level shop like we used to be.
Mon Jun 07 2021 10:29:15 AM EDT from ParanoidDelusions
Intel didn't draw a distinction between app and server/infrastructure. System Engineer meant you were responsible for the server, attached hardware (DAS, NAS, SAN, whatever), and the apps it served. We had networking - and development - but development worked in test labs, and then we moved the finished apps up through dev, test, day1, preprod, and production with less assistance from Development at each step.
Which I guess was a smart move. It kept the developers out of production environments and tried to ensure that production apps were so polished that someone without development skills could land them.
It actually sounds a lot like my first group at Intel. At one point I was the sole owner of a poorly maintained server that simply did EDI translations for all of Intel's eBusiness. It handled billions of dollars in Intel transactions every year - and if it went down, that business came to a halt. When I first took over the environment, we were rebooting it at least once a week because the translator queue stalled. On a couple of top-end Xeons with internal RAID and shared, clustered DAS storage, that reboot took about 20 to 30 minutes. I figured out which process was stalling and was able to just kill and restart it for a while, then organized a tiger team with the product vendor (Sterling Gentran) and helped them find a legacy issue in the translator, which was corrected. We went from somewhere around 87% uptime to being on track for better than five nines in the space of 3 years.
Because of Intel's policy of "continuous improvement," I was moved on to a new project to "keep me challenged," and was replaced on that project by an engineer widely regarded as incompetent among the IT staff. This resulted in me having to learn, own, and be responsible for a completely new environment while "cross training" my replacement on the EDI servers. In practical terms that translated to owning both environments while only getting credit for the new one, and getting blamed if anything went wrong with the old one.
This is about the time that my satisfaction with working at Intel started to tank.
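(For scale: 87% uptime is roughly 47 days of downtime a year; five nines is about five minutes.) The "kill and restart the stalled process" fix is basically a dumb watchdog. Here is a minimal Python sketch of that kind of thing, purely illustrative - the spool path, restart command, and thresholds are made up, not whatever the real Gentran box actually used:

# Hypothetical "kill and restart the stalled translator" watchdog.
# Paths, commands, and thresholds are invented for illustration.
import subprocess
import time
from pathlib import Path

SPOOL_DIR = Path(r"D:\gentran\spool")     # hypothetical inbound document queue
CHECK_INTERVAL = 300                      # seconds between checks
STUCK_CHECKS = 3                          # unchanged checks that count as "stalled"
RESTART_CMD = ["restart_translator.cmd"]  # hypothetical restart script

def queue_depth() -> int:
    """Number of documents waiting to be translated."""
    return sum(1 for p in SPOOL_DIR.iterdir() if p.is_file())

def main() -> None:
    last_depth = queue_depth()
    stuck = 0
    while True:
        time.sleep(CHECK_INTERVAL)
        depth = queue_depth()
        # A backlog that exists and isn't draining means the translator is wedged.
        if depth > 0 and depth >= last_depth:
            stuck += 1
        else:
            stuck = 0
        if stuck >= STUCK_CHECKS:
            # Much cheaper than a 20-30 minute reboot of the whole box:
            # bounce just the stalled translator process.
            subprocess.run(RESTART_CMD, check=False)
            stuck = 0
        last_depth = depth

if __name__ == "__main__":
    main()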
Mon Jun 07 2021 12:03:13 EDT from Nurb432
We used to be more like that, but as things got more complex over the years (both in products and in our environment), and our user base grew (making the risk higher of being on the nightly news if we screwed up), it made more sense to separate out duties across architecture boundaries.
Sure, it was nice to be able to get into the VM host or the server OS that ran my app when I wanted to, but it's also nice to just point my finger when it goes up in smoke a bit down the line.
I think our core user base is around 40k, but it's in the millions if you want to include all non-core interactions with our 'customers'. Sure, there are larger environments out there, but it's not a 'seat of the pants' level shop like we used to be.
Mon Jun 07 2021 10:29:15 AM EDT from ParanoidDelusions
Intel didn't draw a distinction between app and server/infrastructure. System Engineer meant you were responsible for the server, attached hardware (DAS, NAS, SAN, whatever), and the apps it served. We had networking - and development - but development worked in test labs, and then we moved the finished apps up through dev, test, day1, preprod, and production with less assistance from Development at each step.
Which I guess was a smart move. It kept the developers out of production environments and tried to ensure that production apps were so polished that someone without development skills could land them.
Heh. I've had plenty of "those" sleepless nights. Now I'm in R&D and they don't bother me anymore :)
The last plant I worked for (which I really miss... long story of why I'm not there, a 'slash and burn' event), I was the entire IT department (we only had about 300 employees; we were an aluminum casting facility). Came in and it was a mess. The dude before me was a moron and had been gone a month before. ("I think he left a note for the new guy when he left... there is the door where the computer stuff is, and here is a key... good luck, that is about all we know.") Sooo, first thing on the agenda was to make a backup. There weren't any!
I started in spring and over that summer redid everything one by one: servers, workstations, printers, even the network (both blue hose for the shop and Ethernet/fiber for the office area, but kept them separate, out of paranoia about getting hacked and crashing production, or killing someone, even though it was 25 years ago and that stuff really didn't happen).
Later that fall, the CTO called me into his office (I reported up through him) and commented, "You know, ever since you got here, our stuff 'just works.' I had forgotten what that is like... thank you."
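The "first thing, make a backup" step is the kind of thing that can start as a one-file script before any real backup product shows up. A rough Python sketch, with invented paths (the real thing 25 years ago was presumably batch files and tape, not this):

# Hypothetical "day one" backup job: one timestamped archive of the important shares.
# All paths here are made up for illustration.
import tarfile
import time
from pathlib import Path

SOURCES = [Path("/srv/fileserver/shared"), Path("/srv/erp/data")]  # hypothetical data to protect
DEST = Path("/mnt/backup")                                         # hypothetical backup target

def backup_once() -> Path:
    """Write one compressed, timestamped archive of everything in SOURCES."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = DEST / f"plant-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCES:
            tar.add(src, arcname=src.name)
    return archive

if __name__ == "__main__":
    print(f"Wrote {backup_once()}")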
"Something inappropriate that would get me in trouble if I said it is included herein by reference."
-- me, just now, on a meeting call
I like to say when I leave a place, I don't burn my bridges - I napalm them. Or as my AMA racing Intel buddy who reminds me of Ig likes to say before he punches it into a corner at triple digits, "in for a penny, in for a pound..."
I knew a couple of guys who worked at shops in Ohio that were like this. Where one or two people were supporting *everything* in IT for like... 300 or more employees.
I was at a place where I started with two guys, they added me, I added a third guy, and then 4 more, and each was specialized. The difference that made was tremendous. It is an industry where you can have one very cross-disciplined guy - but things work great if you have a specialized guy/team for each department who are cross-disciplined enough to provide backup support.
But - yeah - if you're pretty good, at those one man shops - you generally get treated like you're virtually irreplaceable.
Mon Jun 07 2021 12:45:48 EDT from Nurb432
The last plant I worked for (which I really miss... long story of why I'm not there, a 'slash and burn' event), I was the entire IT department (we only had about 300 employees; we were an aluminum casting facility). Came in and it was a mess. The dude before me was a moron and had been gone a month before. ("I think he left a note for the new guy when he left... there is the door where the computer stuff is, and here is a key... good luck, that is about all we know.") Sooo, first thing on the agenda was to make a backup. There weren't any!
I started in spring and over that summer redid everything one by one: servers, workstations, printers, even the network (both blue hose for the shop and Ethernet/fiber for the office area, but kept them separate, out of paranoia about getting hacked and crashing production, or killing someone, even though it was 25 years ago and that stuff really didn't happen).
Later that fall, the CTO called me into his office (I reported up through him) and commented, "You know, ever since you got here, our stuff 'just works.' I had forgotten what that is like... thank you."
Oh, I didn't do the burn...
Short version is we got a new CFO and his 'goal' was to bankrupt the company but not make it obvious. Among other casualties, I was one. Another was our manufacturing maintenance people/supplies: "They are not doing anything, machines are running, get rid of them" (this was during our annual shutdown/product changeover...).
While it sounds silly to an outsider why someone would do this: we were a joint venture between an American corporation and a Japanese one. They brought the tech, the US side brought the business. They had 51%. The ONLY reason the US company did it was to get hold of the tech. After a few years of decent business, the goal was to run it into the ground just enough that the Japanese would bail, leaving behind the tech. They are far more committed than most people think, and it took a total collapse to get them gone. But it went too deep, and the company ended up being bought by an investment firm.
"Look at me, I saved all this cash..." as the place went under 6 months after he left. It could not recover, by design. He had been there less than 6 months.
That was also where I learned a new term: "containment." I had been in the automotive manufacturing business over a decade by then but had never heard that term. Never want to hear it again. Most companies don't survive it. We did, somehow. (That was before the new CFO.)