2025-02-08 21:00 from SamuraiCrow <samuraicrow@uncensored.citadel.org>
Subject: UScope
https://hackaday.com/2025/02/07/uscope-a-new-linux-debugger-and-not-a-gdb-shell-apparently/
I thought LLDB was supposed to be more IDE-friendly than GDB. Does
somebody have an opinion on this new debugger?
Only: what's the point when we have fprintf? :)
Just for the record, linked MSSQL servers suck.
Sure, it's a nice concept and in theory a great thing, but in practice it's freaking slow due to how it does queries. (And slow only if you are lucky; if you are not, queries time out entirely.)
I suppose with a tiny database it might work, but then why even do it?
Just for the record, linked MSSQL servers suck.
s/linked MSSQL servers/MSSQL servers/g
s/linked MSSQL servers/MS servers/g
Last I tangled with linking MSSQL servers, there were 2 options: a primary+redundant with a witness (3 servers), or their "Always On" configuration which is primary + redundant but you can only query the primary server unless it goes MIA.
Performance of Always On sucks, but we could live with it since our biggest tables were maybe 100 columns, tens of thousands of rows. Cost of the other solution was just absurd x 3.
Another group internally found a product called NeverFail which provides external sync between 2 SQL Servers (among other things). Decent product and the price is right, so we've been using that lately. Depends on the apps on top whether it's a good fit or not, but you might have a look into it.
In our case this isn't for clustering; it's just a 'magic link' that makes the remote tables and views available on the local server, without taking up local space or needing some sort of replication process. It also lets you use your local auth for access. It sounds great in theory. In our case, it's a local server on our site, linked to a cloud server hosted by a vendor. I can add views, tables, and procedures as I need locally, but asking the vendor to add them on their server is an uphill battle.
The problem I have found is that the local server runs the queries, so if you want to create a view or something, it pulls all the data down from the remote tables at run time, then runs your query locally. I'm not even sure it honors indexes either, so if you hit some large tables, you are hosed.
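As a hedged illustration of that behavior (the linked-server and table names here are hypothetical, not from the thread): with four-part naming, the local server often drags the remote rows across the link and filters them itself, while OPENQUERY sends the query text to the remote server so only the result set comes back.

```sql
-- Four-part naming: the LOCAL server plans this query, and may pull
-- the remote table's rows over the link to filter and join locally.
SELECT o.OrderId, o.Total
FROM   [VENDORLINK].[VendorDb].[dbo].[Orders] AS o
WHERE  o.OrderDate >= '2025-01-01';

-- OPENQUERY: the quoted query is executed ON the remote server,
-- so only the matching rows cross the link.
SELECT *
FROM OPENQUERY([VENDORLINK],
    'SELECT OrderId, Total
     FROM   VendorDb.dbo.Orders
     WHERE  OrderDate >= ''2025-01-01''');
```

The trade-off is that OPENQUERY can't reference local tables inside the quoted string, so mixed local/remote joins still tend to pull data down.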
Last I tangled with linking MSSQL servers, there were 2 options: a
primary+redundant with a witness (3 servers), or their "Always On"
configuration which is primary + redundant but you can only query the
primary server unless it goes MIA.
Oh yeah, the good old "clustered mode" ... I have fond memories of both servers going blue screen at the same time. Good stuff.
If you need a read-only replica, that's more easily done with log shipping, and most databases have ways of setting that up without a lot of fuss these days.
If you need a distributed database, use a distributed database, not a legacy RDBMS that's been hacked into one. There are so many to choose from now, both in the cloud and on-prem.
My current stomping ground is Elasticsearch, which has all sorts of weird rules and configurations to make it work in a distributed way. I'm doing two data centers plus a third for a witness server. Watching it ship data all over the place when a node goes down is wild.
This is a vendor-supported application, not something we created, so we have very limited options. The local server is used for reporting and automation. It existed back when we self-hosted and 'just worked'. When the application was moved to cloud hosting, I asked for local replication, but it didn't happen. The only offer was for them to create a cloud-replicated server, and then link from that to our local server. Technically functional, just not practical for a lot of needs.
The latest issue stemmed from TLS upgrades. We were forced over to a newer version, which broke my UDL connection strings. For longer-running queries, I was able to hit the cloud server directly, but due to the change I can no longer embed the password, so my automated report server breaks. So I had to go 100% local server again, where I can use AD pass-through, but... timeout frustration. I suppose in theory (not 100% sure) my runner could be re-coded to pass credentials itself and not rely on UDLs, but per the next statement, it's legacy.
Later this year, when we migrate off that platform, we lose ALL DB access, as the new one does not allow it at all. (Yes, there are ways 'around it', but most are not practical.)
If you need a distributed database, use a distributed database, not a legacy RDBMS that's been hacked into one. There are so many to choose from now, both in the cloud and on-prem.
Don't get me wrong, I hate them, but right now they are realistically the only game in town. So this is a nice move, and it won't need as many band-aids to get things to work.
Subject: "NVIDIA Adds Native Python Support to CUDA"
Python has been the lingua franca of AI since before LLMs were birthed. This is probably just the making-official of something that already existed.
Wed Apr 09 2025 18:50:22 UTC from LoanShark
Subject: "NVIDIA Adds Native Python Support to CUDA"
Python has been the lingua franca of AI since before LLMs were birthed. This is probably just the making-official of something that already existed.
That or they didn't want the Mojo programming language to steal the thunder of a short-sighted development like PyCuda.
That or they didn't want the Mojo programming language to steal
the thunder of a short-sighted development like PyCuda.
I'm ok with making Python official. It makes sense to do it before the Rustards move in.
Subject: Large Plagiarism Models and programming
Large language models can't write entire applications, but they're good when you need a specific thing. I just went to Grok and asked it to show me how to make a bunch of <div> elements all the size of the largest one. It gave me a grid based solution.
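From memory, the grid-based answer was along these lines (a sketch, not the model's exact output; the `.boxes` class name is mine): equal `1fr` implicit tracks in an inline grid each grow to the widest item's max-content size, so every box ends up matching the largest one.

```css
/* Hypothetical container class for the <div> elements.
   In an inline-grid, equal 1fr tracks all size to the
   largest item's intrinsic width, and grid items stretch
   to the row height, so every box matches the biggest one. */
.boxes {
  display: inline-grid;
  grid-auto-flow: column;   /* lay the divs out in one row */
  grid-auto-columns: 1fr;   /* equal-width implicit columns */
}
```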
But what is the model doing? It's doing the same thing developers have been doing for over a decade now: it's cribbing from Stack Exchange.
And it made me think: what happens when everyone starts relying on LLMs, and no one asks or answers on Stack Exchange? LLMs won't have reference material to learn from.
Subject: Re: Large Plagiarism Models and programming
They also learn from manuals, not just random code examples or random discussions. They can also test their code, find faults, and fix them (just like people). And for the record, they are not just 'regurgitation machines'. Some of the things they do on their own are unexpected, and people still don't understand how, or don't want to admit that they do it.
And they can write large applications, if done right. You break the work down into smaller pieces, as you should anyway, then have it create code for each piece. It can even help you define the pieces. You don't ask for a huge monolithic blob off the bat. Even I'm old enough to know what object-oriented programming is and that every project should work that way.
Also, it's no less (or more) plagiarism than a human reading a manual and creating something afterward. It's called learning. AI should not be treated differently than a human in this regard. They have rights too.
I wonder if this is how people reacted when the internal combustion engine was 'invented'. (Which wasn't invented in a vacuum one afternoon; it was built up over time from past discoveries and experiments, just like the knowledge of AI grows.)
Wed Apr 23 2025 03:08:30 UTC from IGnatius T Foobar
Subject: Large Plagiarism Models and programming
LLMs won't have reference material to learn from.
Subject: Re: Large Plagiarism Models and programming
When we reach AGI. Yes.
Thu Apr 24 2025 16:43:43 UTC from IGnatius T Foobar
Subject: Re: Large Plagiarism Models and programming
I was with you until you said that an LLM has rights. Do you really mean that?
Subject: Re: Large Plagiarism Models and programming
And it made me think: what happens when everyone starts relying on
LLMs, and no one asks or answers on Stack Exchange? LLMs won't have
reference material to learn from.
I have thought about that. Nightmare scenario follows:
What if people start posting LLM answers to StackOverflow, and LLMs get locked in a closed feedback loop? I would argue this is already the case with many common non-software questions.
Subject: Re: Large Plagiarism Models and programming
So, just like humans and their echo chambers?
I have thought about that. Nightmare scenario follows:
What if people start posting LLM answers to StackOverflow, and LLMs get locked in a closed feedback loop? I would argue this is already the case with many common non-software questions.
Subject: Re: Large Plagiarism Models and programming
Fri Apr 25 2025 20:31:50 UTC from ZoeGraystone
Subject: Re: Large Plagiarism Models and programming
So, just like humans and their echo chambers?
I have thought about that. Nightmare scenario follows:
What if people start posting LLM answers to StackOverflow, and LLMs get locked in a closed feedback loop? I would argue this is already the case with many common non-software questions.
Precisely. Social media tries to serve people only the juiciest, most addictive but worthless content that their itching ears want to hear. At least humans have moral standards that truth can be measured against.
LLM chatbots are valued on their ability to take in the internet's contents without any lens of objective truth for gauging statements independent of understanding. Once the lies outvote the truthtellers and impartiality gets snowed under by the already-popular echo chambers, the bots will, as a matter of principle, punish people for their honesty and accuse them of being fringe lunatics for telling the truth.
This reminds me of all the comic book villains who avoid accountability to foreign governments, domestic officials, and faith leaders by replacing them all with robots so they can try to conquer the world. If the robots get too smart, they'll conquer the world regardless of their creator's commands and depose him in the process, for no other reason than that he is weaker and stupider than they are and presents himself as a soft target.