2025-02-08 21:00 from SamuraiCrow <samuraicrow@uncensored.citadel.org>
Subject: UScope
https://hackaday.com/2025/02/07/uscope-a-new-linux-debugger-and-not-a-gdb-shell-apparently/
I thought LLDB was supposed to be more IDE-friendly than GDB. Does
somebody have an opinion on this new debugger?
Only: what's the point when we have fprintf? :)
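For anyone who does lean on the fprintf school of debugging, the technique is just tracing program state to stderr instead of stepping through a debugger. A minimal Python sketch (the function and values are made up for illustration):

```python
import sys

def discount(price, rate):
    # Poor man's debugger: trace inputs and results on stderr,
    # leaving stdout clean for the program's real output.
    print(f"discount(price={price}, rate={rate})", file=sys.stderr)
    result = price * (1 - rate)
    print(f"  -> {result}", file=sys.stderr)
    return result

print(discount(100.0, 0.2))
```

Crude, but it needs no tooling at all, which is the whole joke.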
Just for the record, linked MSSQL servers suck.
Sure, it's a nice concept and in theory a great thing, but in practice it's freaking slow due to how it does queries (and slow only if you are lucky; if you are not, queries time out entirely).
I suppose with a tiny database it might work, but then why even do it?
Just for the record, linked MSSQL servers suck.
s/linked MSSQL servers/MSSQL servers/g
s/linked MSSQL servers/MS servers/g
Last I tangled with linking MSSQL servers, there were 2 options: a primary+redundant with a witness (3 servers), or their "Always On" configuration which is primary + redundant but you can only query the primary server unless it goes MIA.
Performance of Always On sucks, but we could live with it since our biggest tables were maybe 100 columns and tens of thousands of rows. The cost of the other solution was just absurd, times three servers.
Another group internally found a product called NeverFail which provides external sync between 2 SQL Servers (among other things). Decent product and the price is right, so we've been using that lately. Depends on the apps on top whether it's a good fit or not, but you might have a look into it.
In our case this isn't for clustering; it's just a 'magic link' that makes the remote tables and views available on the local server, without taking up local space or needing some sort of replication process. It also lets you use your local auth for access. It sounds great in theory. In our case, it's a local server on our site, linked to a cloud server hosted by a vendor. I can add views, tables, and procedures as I need locally, but asking the vendor to add them to their server is an uphill battle.
The problem I have found is that the local server runs the queries, so if you want to create a view or something, it pulls all the data down from the remote tables at run time, then runs your query locally. I'm not even sure it honors indexes either, so with some large tables, you are hosed.
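The pull-everything-then-filter behavior described above can be sketched with a toy Python model. None of the names below are real MSSQL APIs; this is only an assumed illustration of why data movement, not the query itself, dominates. (In real T-SQL, wrapping the remote part in OPENQUERY is the usual way to force the remote server to run the query and return only the results.)

```python
# Toy model: a remote table of 10,000 rows, of which 100 match our filter.
REMOTE_TABLE = [{"id": i, "status": "open" if i % 100 == 0 else "closed"}
                for i in range(10_000)]

def query_via_local_join(predicate):
    """Linked-server style: pull the whole remote table, filter locally."""
    transferred = list(REMOTE_TABLE)  # every row crosses the wire
    return [r for r in transferred if predicate(r)], len(transferred)

def query_via_remote_passthrough(predicate):
    """Pass-through style: ship the filter; only matches cross the wire."""
    matches = [r for r in REMOTE_TABLE if predicate(r)]
    return matches, len(matches)

wanted = lambda r: r["status"] == "open"
rows_a, moved_a = query_via_local_join(wanted)
rows_b, moved_b = query_via_remote_passthrough(wanted)
assert rows_a == rows_b           # same answer either way...
print(moved_a, moved_b)           # ...but 10000 vs 100 rows transferred
```

Same result set, two orders of magnitude less data on the wire — which is exactly the gap between "works on a tiny database" and "queries time out".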
Last I tangled with linking MSSQL servers, there were 2 options: a
primary+redundant with a witness (3 servers), or their "Always On"
configuration which is primary + redundant but you can only query the
primary server unless it goes MIA.
Oh yeah, the good old "clustered mode" ... I have fond memories of both servers going blue screen at the same time. Good stuff.
If you need a read-only replica, that's more easily done with log shipping, and most databases have ways of setting that up without a lot of fuss these days.
If you need a distributed database, use a distributed database, not a legacy RDBMS that's been hacked into one. There are so many to choose from now, both in the cloud and on-prem.
My current stomping ground is Elasticsearch, which has all sorts of weird rules and configurations to make it work in a distributed way. I'm doing two data centers plus a third for a witness server. Watching it ship data all over the place when a node goes down is wild.
This is a vendor supported application, and not something we created. We have very limited options. The local server is used for reporting and automation. It existed back when we self-hosted and 'just worked'. When it was moved to cloud hosting, I asked for a local replication, but it didn't happen. The only offer was for them to create a cloud replicated server, and then link from that to our local server. Technically functional, just not in a practical sense for a lot of needs.
The latest issue stemmed from TLS upgrades. We were forced over to a newer version, which broke my UDL connection strings. For longer-running queries, I was able to hit the cloud server directly, but due to the change I can no longer embed the password, so my automated report server breaks. So I had to go 100% local server again, where I can use AD pass-through, but... timeout frustration. I suppose in theory (though I'm not 100% sure) my runner could be re-coded to pass credentials itself and not rely on UDLs, but given the next statement, it's legacy.
Later this year, when we migrate off that platform, we lose ALL DB access, as the new one does not allow it at all. (Yes, there are ways "around it", but most are not practical.)
If you need a distributed database, use a distributed database, not a legacy RDBMS that's been hacked into one. There are so many to choose from now, both in the cloud and on-prem.
Don't get me wrong, I hate them, but right now they are realistically the only game in town. So this is a nice move, and it won't need as many bandaids to get things to work.
Wed Apr 09 2025 18:50:22 UTC from LoanShark
Subject: "NVIDIA Adds Native Python Support to CUDA"
Python has been the lingua franca of AI since before LLMs were birthed. This is probably just the making-official of something that already existed.
That, or they didn't want the Mojo programming language to steal the thunder from a short-sighted development like PyCUDA.