Right, budget still comes into play. Extra resources are still not free, just dynamic. Unless you were running them as VMs locally, in which case it was dynamic then too.
In theory, other than a bit more electricity, our cost really didn't change when you upped your resources on-site, since we had already bought the servers. But we still charged the BU more, since those were resources we couldn't allocate to someone else.
Mon Feb 22 2021 08:32:54 EST from DutchessMike
I agree that cloud computing has made it easier to size hardware to workload, but it only kicked the hardware problem down the road. In your example case, he could still be pissed even if the systems he manages are running on slow hardware because management won't approve the revised budget for the upgrade. Amazon makes billions nickel-and-diming folks to death. :)
Sat Feb 20 2021 12:21:13 EST from LoanShark
2021-02-20 09:12 from Nurb432
Our DB servers are pegged. All.. the.. time..
I've worked at shops like that. In one case, before cloud hosting became a thing, one of the sysadmins was flipping out and angry at management because they wouldn't approve a hardware upgrade.
These days though, it's so easy. Log into the Amazon RDS admin console and change your instance type from db.t3.medium to db.t3.large or whatever. You'll see several seconds to maybe a minute of downtime for migration to occur, and then, problem solved.
Where I work now, we have the lightest DB utilization in the world.
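That console change can also be scripted instead of clicked. A minimal sketch of building the RDS ModifyDBInstance request, assuming boto3 and AWS credentials are configured; the instance identifier here is made up:

```python
# Sketch of resizing an RDS instance programmatically rather than via the
# console. "my-db" is an example identifier, not a real resource.

def build_modify_request(instance_id, new_class, apply_now=True):
    """Build the parameter set for RDS ModifyDBInstance."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": new_class,
        # False defers the change to the next maintenance window
        "ApplyImmediately": apply_now,
    }

if __name__ == "__main__":
    params = build_modify_request("my-db", "db.t3.large")
    print(params["DBInstanceClass"])
    # With boto3 installed and credentials configured, the actual call is:
    #   import boto3
    #   boto3.client("rds").modify_db_instance(**params)
```

Same brief blip of downtime applies either way; the API just makes it automatable.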
2021-02-23 15:27 from Nurb432
Right, budget still comes into play. Extra resources are still not
free. Just dynamic, unless you were running them as VMs locally, as
it was dynamic then too.
Dynamic can be kind of a big deal though, compared to back in the day when you had a physical server with 1 or 2 sockets of 4-core processors that you were thinking about upgrading, and it had to be a 2-week to 2-month project because you had to get the hardware shipped to your site, and then benchmark it, and then develop and implement minimal-downtime upgrade procedures, and then figure out backup and disaster-recovery and contingency plans. Basically all of this is now something you can just point-and-click on a console.
Oh, I agree, just that I figured most companies now have gone the VM route and are no longer doing dedicated physical servers. Cost is no longer a barrier to doing it.
Even if you don't 'need' it to be virtual and want to run a resource at 1:1, it's still an advantage for migrations, backups and such. The overhead is so slight it's well worth it.
I suppose we can thank Oracle for helping make open source databases so popular, because that's the kind of nonsense that drove people away.
Yeah, now that a typical 2-socket blade is starting to have like 128C/256T, you just don't need your typical database workload to be on bare metal anymore.
Last time I was in that world as a VMware admin, Oracle charged for every core in the entire farm, even if the workload was pinned to a # of cores. Not just 'the server'. And my understanding was that even if you pinned it to specific servers, they didn't care. You pay for your entire farm, all of it. The only way around it was to isolate them into their own farm, limiting their value. (And still pay for the entire farm, just not as much as for your 'regular' farm.)
I don't remember how they billed for their apps, however, but I suspect it was similar. I do know many companies are getting off their app servers and going to just Tomcat, as they are being priced out of their market due to Oracle costs.
Even Microsoft wasn't that bad.
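To put rough numbers on why people bothered isolating at all: a back-of-the-envelope sketch. The per-core price and farm sizes are invented for illustration, not real Oracle list prices (real pricing also involves core factors and negotiation):

```python
def license_cost(cores, price_per_core):
    # You license every core the DB *could* run on, not the cores it uses
    return cores * price_per_core

PRICE = 1000  # hypothetical dollars per core per year

# DB workload free to float across a 20-host, 32-core-per-host farm:
whole_farm = license_cost(20 * 32, PRICE)
# Same workload penned into a dedicated 2-host Oracle-only farm:
isolated = license_cost(2 * 32, PRICE)

print(whole_farm, isolated)  # 640000 64000
```

With made-up numbers like these, the dedicated mini-farm cuts the licensed core count 10x, which is exactly why people put up with the reduced flexibility.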
Thu Feb 25 2021 09:17:26 EST from IGnatius T Foobar
Database on VM was weird for a long time. Part of it was that some people were hesitant to put their most resource-intensive workloads on a virtual machine. Part of it was Oracle, who charged you based on the number of sockets installed in the computer, regardless of whether they were all attached to the VM in which you ran their software.
I suppose we can thank Oracle for helping make open source databases so popular, because that's the kind of nonsense that drove people away.
you pinned it to specific servers, they didnt care. You pay for your
entire farm, all of it. The only way around it was to isolate them
into their own farm, limiting their value. ( and still pay for the
Right. In our data centers we used to have a lot of customers who ended up isolating that workload onto a specific server because they didn't want to deal with the license problem. But we also had just as many who said "screw it, the number of cores the VM sees is the number of cores we're licensing" and that was that.
I wonder how long it took for Oracle to figure out how many customers they lost because people were running databases in multitenant clouds and didn't feel like shoveling out trillions of dollars in licensing fees.
Microsoft SQL Server had an equally boneheaded license at one time. It only ran on as many sockets as you paid the license for, but it didn't care how many cores were in the socket. So if you ran it on VMware you just set up your virtual CPUs as one socket with however many cores you wanted, and even if they were physically in different sockets, the software saw them as one.
No wonder MariaDB is more popular than ever.
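The VMware side of that trick was just a topology setting. A sketch of what it looked like in a VM's .vmx configuration (the values here are illustrative):

```
numvcpus = "8"
cpuid.coresPerSocket = "8"
```

With coresPerSocket equal to the total vCPU count, the guest reports a single 8-core socket, so socket-based licenses counted one socket no matter how the physical cores underneath were laid out.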
Didn't even think about public data centers. That would be even worse.
In theory, if you put an Oracle server on Azure or AWS, you are talking a bit of money there.
2021-02-25 18:34 from Nurb432
Last time i was in that world as a VMware admin, Oracle charged for
every core in the the entire farm, even if the workload is pinned to
a # of cores. Not just 'the server'. And my understanding was even if
Yeah, Oracle is notorious for a particular negotiation style, perhaps best described as:
(1) What are your annual profits?
(2) Send them to us.
Not unlike the IRS, but I digress.
It's a different world these days. I haven't worked at an Oracle shop since 2011; I've been on MySQL and PostgreSQL. The Amazon RDS model has perhaps made things difficult for Oracle's negotiation style: you can get Oracle on RDS, and you simply pay by the instance-hour according to the standard RDS pricing model for the instance type, plus an Oracle license fee rider, also billed by instance-hour, which is based on the instance type, probably according to the number of vCPUs.
In other words, Amazon is collectively bargaining on your behalf. Of course it's still more expensive than the open source databases on the same hardware, but ultimately I don't think this is the model Oracle would have preferred if they had a choice, nor do I think the long-term industry trends bode well for Oracle.
servers and going to just tomcat, as they are being priced out of
their market due to Oracle costs.
Yes, Tomcat - that's a done deal. Or Spring Boot, which is also becoming popular these days; it embeds Jetty or something like that, in a package you can just launch as a command-line JAR. J2EE is thoroughly a dead letter.
The app side of the house is what keeps a lot of people on their product (APEX, for example). While not totally trivial, moving to another DB for data storage is not a huge deal. It's the app stuff that takes tons of time/money/risk.
Good to hear that licensing has become a bit less draconian. The last thing I had to do with Oracle was be local support for a vendor-supplied app, plus reporting via Crystal (which is part of my wheelhouse), so things like licenses were not my problem. We have a DB team, an app server team, and then app guys like me. It wasn't my main gig, just a side thing to help out, since I'm about the only guy who understands the entire picture and has done it all, so I could interface between the other groups in a useful way. I still had my 'day job' app to support.
The app moved to the cloud mid-last year, so I'm out.. and I had a party afterward. It was (is) a POS.
product. ( Apex for example ). While not totally trivial, moving
to another DB for data storage is not a huge deal. Its the app stuff
Modern app frameworks have given a lot of thought to how to make managing your DB schema more rigorous. It's mostly a good thing, but the frameworks work like this: every time you want to modify the schema you generate a migration script that can run in either direction: forward or reverse (undo your schema change in case you need to rollback your code.)
Therefore, for an app built in such a framework that has been around for a few years, the process to create the schema is not a simple list of CREATE TABLE commands; instead it is a years-long collection of incremental changes to each table's DDL. Imagine the annoyance factor of trying to convert all those scripts from PostgreSQL to MySQL! You wouldn't. I guess you would find a way to do a dump of the current schema. I'm not sure how the end result would fit into the framework without some kind of hacky 'flag day' operation, and there would inevitably be a lot of other little vendor-specific places.
Not disagreeing, but I've never even worked on a team that saw fit to bother with an Oracle-to-Postgres migration, which would be the one that would make the most sense.
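A minimal sketch of that forward/reverse migration pattern, using an in-memory SQLite database. Real frameworks (Rails migrations, Alembic, Flyway) add a version-tracking table, checksums, and CLI tooling on top; the table and column names here are invented:

```python
# Each migration is a pair: (forward DDL, reverse DDL to undo it)
import sqlite3

MIGRATIONS = [
    ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE users"),
    ("ALTER TABLE users ADD COLUMN email TEXT",
     # Note: SQLite only supports DROP COLUMN from 3.35 onward
     "ALTER TABLE users DROP COLUMN email"),
]

def migrate(conn, target):
    """Apply forward migrations up to index `target` (exclusive)."""
    for up, _down in MIGRATIONS[:target]:
        conn.execute(up)

def rollback(conn, applied):
    """Undo the last `applied` migrations in reverse order."""
    for _up, down in reversed(MIGRATIONS[:applied]):
        conn.execute(down)

# Forward: apply both migrations, schema accretes incrementally
conn = sqlite3.connect(":memory:")
migrate(conn, 2)
cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']

# Reverse: roll back the first migration on a fresh database
conn2 = sqlite3.connect(":memory:")
migrate(conn2, 1)
rollback(conn2, 1)  # runs "DROP TABLE users"
tables = [r[0] for r in conn2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # []
```

The vendor lock-in point above is visible even in this toy: each DDL string is written in one dialect, so a cross-database move means revisiting every migration, not one schema file.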
The app side of the house is what keeps a lot of people on their
product. ( Apex for example ). While not totally trivial, moving
to another DB for data storage is not a huge deal. Its the app stuff
I think Oracle knows that. They've bought up a lot of different software houses (including one I worked a couple of summers at back in high school) and they probably envision their medium to long term strategy as being a SaaS powerhouse.
Being a company everyone loves to hate doesn't really help their cause, though.
They probably know that databases are a commodity at this point, and being the "owner" of Java probably isn't bringing them the zillions they were hoping for.
I spent some time this week mucking about with UEFI and actually making an effort to understand it. I had a general idea how it worked, but my usual procedure was "if there's a menu option to select a UEFI install, try it, then go back to legacy mode if it doesn't work." But I'm trying to boot my work laptop into a not-work OS image using an external drive, and changing that machine to BIOS is not an option.
The idea that there needs to be a dedicated EFI partition is intriguing.
It holds a lot of similarity to the MS-DOS partition you needed to boot a Netware server, or the /boot partition often needed on BIOS-booted Linux machines.
Indeed, I see that on a proper install, it's mounted as /boot/efi during normal runtime.
What I *didn't* know, and no one seems to go out of their way to make it clear, is that you must have the disk partitioned as GPT in order for anything to work. That seems to make a big difference.
I'm also fascinated by the fact that UEFI has support for the FAT32 filesystem built into the ROM, and therefore needs no master boot record on the disk.
It just opens the filesystem and looks for bootable stuff in a pre-designated place, and runs it.
Seems like you could actually build an entire operating system, one that perhaps has the same level of functionality as MS-DOS, without ever leaving the UEFI environment.
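The "pre-designated place" is standardized: on a removable disk, x86-64 firmware falls back to \EFI\BOOT\BOOTX64.EFI on the FAT32 ESP. Staging one is mostly putting files where the firmware expects them; a sketch, with a placeholder standing in for a real loader binary such as GRUB's grubx64.efi:

```shell
# Stage the fallback boot path UEFI firmware probes on a FAT32 ESP.
# EFI/BOOT/BOOTX64.EFI is the standard removable-media path for x86-64.
mkdir -p esp/EFI/BOOT
# Copy your real loader here; this placeholder just marks the spot.
printf 'not-a-real-loader' > esp/EFI/BOOT/BOOTX64.EFI
ls -R esp
```

On the actual drive this tree has to live on a FAT32 partition of a GPT-partitioned disk, per the point above, or the firmware won't look at it.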
If the laptop has CSM, you should be able to put it into UEFI+CSM dual boot mode. This is not preferred, because CSM is less secure, but it's an option.
So does U-Boot (tho I'm not 100% sure it's mandatory; seems that everyone does it that way).
Thu Mar 04 2021 16:39:37 EST from IGnatius T Foobar
The idea that there needs to be a dedicated EFI partition is intriguing.
I think it's a bit more than that, but it is a pain in the neck no matter what. I still struggle setting things up from scratch and getting all the device stuff right.
Mon Mar 08 2021 14:40:07 EST from IGnatius T Foobar
Ugh. Is U-Boot the cheap and sleazy hack that I think it is? Just a shim in the UEFI chain that chain-loads GRUB?
That's disgusting. The kernel and initrd should just sit in the EFI System Partition and run directly. That's the *obvious* way to do it.
Run Install.exe
"click YES" to every question 30 times.
Let the install run and reboot 3 times.
Accept all the conditions of the spyware built into the OS.
Done.
Painless.
Setting up U-Boot from scratch isn't like following the bouncing ball. I wish it was. You have to grab several pieces from various sources, put them in the right order and in the right place, and hope you got the right bits for your board, or you get the black light of death..
https://wiki.st.com/stm32mpu/wiki/How_to_configure_U-Boot_for_your_board