[#] Tue Apr 10 2018 11:47:05 UTC from fleeb <>

When I create a service, I assemble an event loop.

If for no other reason than that messages from the service control manager (on Windows) or signals (on Linux) need to be handled in a way that makes sense.

Signalling to stop, for example, ought to provide for performing a clean shutdown, rather than just crashing away quietly.
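That clean-shutdown idea can be sketched as a tiny event loop. This is a generic illustration, not any particular service's code: the SIGTERM handler just enqueues a "stop" event, and the loop runs cleanup before exiting instead of dying mid-write.

```python
import signal
import queue

# Minimal sketch: the service's main loop drains a queue of events, and the
# SIGTERM handler enqueues a "stop" event rather than killing the process,
# so shutdown runs cleanup code. Names (handle_event, cleanup) are
# illustrative, not from any real service.

events = queue.Queue()

def on_sigterm(signum, frame):
    events.put(("stop", None))  # request a clean shutdown

def handle_event(kind, payload, log):
    log.append(kind)

def run(log):
    signal.signal(signal.SIGTERM, on_sigterm)
    while True:
        kind, payload = events.get()
        if kind == "stop":
            log.append("cleanup")  # flush buffers, close sockets, etc.
            break
        handle_event(kind, payload, log)

log = []
events.put(("work", 1))
events.put(("stop", None))  # simulate the signal arriving
run(log)
print(log)  # → ['work', 'cleanup']
```

The same loop shape works for the Windows service control manager: its control requests just become another event source feeding the queue.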

Also, if you're running something as a daemon or service, it implies 'events' to me anyway. You're waiting around for some kind of information to arrive in some fashion, then reacting to it in some way. Events.

But under many situations, it doesn't make sense to go with an event loop if the executable is just meant to perform a task and get out of the way.
'uniq' probably doesn't require events.

[#] Tue Apr 10 2018 15:07:59 UTC from kc5tja

Amiga had a great balance of event-driven and multithreaded, and it worked out fantastically. As for "moving the problem somewhere else," you are of course right, BUT consider that some locations always have advantages over others. If you can basically "automate" resource locking into a message transaction, then you can formally prove the correctness of messaging, and never have to worry about formally proving the combinatorial explosion of possibilities that raw access to locking enables. You can *never* deadlock (due to resource contention, at least) with message passing, because you never hold more than two locks at a time, and always acquire them in a consistent order.
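A minimal sketch of that "automate locking into a message transaction" idea, using only the standard library: one actor thread owns a resource outright and applies mailbox messages one at a time, so callers never touch a lock and there is no acquisition order to get wrong. The account/deposit names are made up for illustration.

```python
import threading
import queue

# One thread owns the resource; all access flows through its mailbox.
# Callers never lock anything themselves -- the queue serializes access.

class Actor(threading.Thread):
    def __init__(self):
        super().__init__()
        self.mailbox = queue.Queue()
        self.balance = 0  # owned exclusively by this thread

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill: shut down
                break
            op, amount, reply = msg
            if op == "deposit":
                self.balance += amount
                reply.put(self.balance)

account = Actor()
account.start()

reply = queue.Queue()
account.mailbox.put(("deposit", 50, reply))
result = reply.get()
print(result)  # → 50
account.mailbox.put(None)
account.join()
```

The only locks in play are the ones inside `queue.Queue`, which is the "well-proven abstraction" point: their correctness is someone else's already-debugged problem.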

Of course, your problem has to be amenable to decomposition into a message-oriented architecture. Thankfully, 90% of the time, it can be with minimal hassle.
I respect that, maybe, 10% of the time, you won't find it as convenient to express your problem. Personally, I've never run into a problem that couldn't be better expressed as an actor of some kind. Event loops and state machines are your friends. Embrace them. They really do make life easier, as long as you don't drink the Node.js kool-aid.

[#] Tue Apr 10 2018 15:47:26 UTC from fleeb <>

Heh... I'm having to drink some of that kool-aid now for a GUI issue (Electron).

But, I hope to keep it limited to just the GUI.

[#] Tue Apr 10 2018 16:09:55 UTC from LoanShark <>

In my experience, you can deadlock in a message-driven environment basically by waiting indefinitely for a message that ain't never gonna arrive.

[#] Wed Apr 11 2018 02:18:05 UTC from kc5tja

That's more a control flow bug than a race condition. A deadlock by definition is a race, where thread A locks resources R1 and R2 in that order, and thread B locks R2 and R1 in that order. Or something provably equivalent to that condition. With message passing, you can use a normal debugger and single-step through code to find the faulty logic. Not so much with deadlocking code, especially if it's intermittent.
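The condition described above, in code, along with the consistent-ordering fix the earlier post mentioned. This is a generic sketch, not anyone's real locking code: both threads acquire the two locks in the same (id-sorted) global order, so the A-holds-R1-wants-R2 / B-holds-R2-wants-R1 cycle can never form.

```python
import threading

# Thread A is handed (r1, r2) and thread B is handed (r2, r1) -- the classic
# setup for a lock-ordering deadlock. The fix: ignore argument order and
# acquire in one consistent global order.

r1, r2 = threading.Lock(), threading.Lock()

def transfer(lock_a, lock_b, log, name):
    # Sort by object identity so every thread locks in the same order.
    first, second = sorted((lock_a, lock_b), key=id)
    with first:
        with second:
            log.append(name)

log = []
a = threading.Thread(target=transfer, args=(r1, r2, log, "A"))
b = threading.Thread(target=transfer, args=(r2, r1, log, "B"))
a.start(); b.start()
a.join(); b.join()
print(sorted(log))  # → ['A', 'B']
```

Without the `sorted()` line, the same program can wedge forever depending on scheduling, and that intermittency is exactly what makes it miserable under a debugger.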

[#] Wed Apr 11 2018 20:00:53 UTC from IGnatius T Foobar

Fuck that noise. Multitasking, multithreaded operating systems have the ability to block execution while waiting for events, specifically so the programmer doesn't have to do it manually.

Event-driven programming basically says "oooh, my life was so much better in the days of cooperative multitasking, can we go back to that please?"

[#] Wed Apr 11 2018 21:58:38 UTC from LoanShark <>

2018-04-10 22:18 from kc5tja @uncnsrd
That's more a control flow bug and not a race condition. A dead-lock


Disagree. Languages like Erlang are still doing locking for you; there's just a fanciful language abstraction that tries to pretend the locking isn't happening.

Yes, when each actor is only served by a single thread, that's serialization and a point of resource contention. You've got to wait for that actor, a single point of contention, to do something. Firing messages at actors in the wrong order, then, can give rise to an analog of the resources-locked-in-the-wrong-order deadlock problem.
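That wrong-order analog can be shown with two toy actors that each make a synchronous call to the other *before* draining their own mailbox. All names and the 0.2s timeout are illustrative (the timeout just makes the hang observable instead of permanent), not from any real framework.

```python
import queue
import threading

# Each actor fires a blocking request at its peer, then waits for a reply.
# Each is a single point of contention: neither services its mailbox while
# blocked mid-call, so both requests sit unserviced and both sides starve.

def actor(my_inbox, peer_inbox, results, name):
    peer_inbox.put(("request", name))  # synchronous call: now await the reply
    try:
        while True:
            kind, sender = my_inbox.get(timeout=0.2)
            if kind == "reply":
                results.append((name, "replied"))
                return
            # A request arrived instead, but we're blocked mid-call and
            # can't service it until our own request completes.
    except queue.Empty:
        results.append((name, "starved"))

inbox_a, inbox_b = queue.Queue(), queue.Queue()
results = []
ta = threading.Thread(target=actor, args=(inbox_a, inbox_b, results, "A"))
tb = threading.Thread(target=actor, args=(inbox_b, inbox_a, results, "B"))
ta.start(); tb.start()
ta.join(); tb.join()
print(sorted(results))  # → [('A', 'starved'), ('B', 'starved')]
```

No lock is ever acquired out of order here, yet the shape of the failure is the same: each side holds a resource (its own thread) while waiting on the other's.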


To put it another way, as soon as you try to do a callback the wrong way, you're screwed.

[#] Fri Apr 13 2018 17:53:28 UTC from kc5tja

"do a callback the wrong way" <-- in what way is that not a control flow bug, exactly like I said it was?

[#] Fri Apr 13 2018 17:55:07 UTC from kc5tja

Messaging APIs also provide the locking abstraction for you, for what it's worth. You don't need a language like Erlang.

[#] Mon Apr 16 2018 16:53:25 UTC from IGnatius T Foobar

So ... in that case, everything has to be an incoming or outgoing message?
What if it's an interactive application?

[#] Mon Apr 16 2018 17:23:28 UTC from fleeb <>

User input becomes an incoming message, and the reaction to that user input is an outgoing message.

[#] Mon Apr 16 2018 17:25:54 UTC from fleeb <>

That is to say, you might have something waiting to write stuff to the console, so it receives a message telling it, "Write this to the console". Or window.
Or edit box. Or whatever.

Whatever the user entered can be submitted as a message into this same event loop.

In reality, this already happens. Most of the GUI toolkits I've worked with use an event system like this.
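A toy version of that flow, with made-up message kinds: user input enters the queue as an "input" message, and the reaction goes back out as a "write" message that a single handler applies to the "console" (or window, or edit box).

```python
import queue

# One event queue carries both directions of traffic; only the "write"
# handler ever touches the output, mirroring how GUI toolkits route
# everything through their event loop.

events = queue.Queue()
screen = []  # stands in for the console / window / edit box

def pump():
    while not events.empty():
        kind, text = events.get()
        if kind == "input":
            # Reacting to user input produces an outgoing "write" message.
            events.put(("write", f"you typed: {text}"))
        elif kind == "write":
            screen.append(text)  # only this handler owns the output

events.put(("input", "hello"))
pump()
print(screen)  # → ['you typed: hello']
```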

[#] Mon Apr 16 2018 17:32:33 UTC from LoanShark <>

kc5tja: the way I see it, when you recast problems within another paradigm/framework, there is always an analogue of your typical pitfalls, such as locking. it looks a little different in the other framework, but it's conceptually the same thing.

if data locking is handled by your actor framework on message receipt, the analogue of the deadlock problem is a design problem: which data is encapsulated within which actor, and how to coordinate locking across multiple actors

[#] Mon Apr 16 2018 17:33:01 UTC from LoanShark <>

to put it another way, it's wrong to argue that a deadlock is not a control flow problem.

[#] Tue Apr 17 2018 19:09:37 UTC from kc5tja

I never made that argument.

What I did say was that altering the paradigm used to think about the problem will influence how you think about those problems, often for the better. It's why functional programming is (finally) starting to take off: we've reached a point where traditional ways of thinking about app development are now a bottleneck. By isolating locking into well-proven and fully debugged abstractions, like message passing, you relieve YOURSELF of the burden of having to write correct locking code, relying on someone else's expertise for that aspect of the program's correct behavior. This frees you up to think about higher-level details.

Further, unlike manual lock management, with message passing, you can step through your code in a debugger and 99% of the time find control flow issues that DO lead to deadlocks. With manual lock management, you have significantly reduced trust that your debugger is telling you the right things.

That's my experience at least.

[#] Tue Apr 17 2018 19:12:12 UTC from kc5tja

I will say this much though; while message passing relieves one of the burden of manual lock management, you do still have to deal with race conditions, which are a separate issue and which can lead to a related but different sort of problem: livelock. But, still, I'll gladly use any approach to writing software that halves my debugging load (deadlock and livelock combined versus just having to worry about livelock).

[#] Fri Apr 20 2018 20:28:30 UTC from LoanShark <>

some more message passing pitfalls - dealing with this crap just today.

you've got a network service that responds to queries. it's got 75 job slots (worker threads) and the queue in front of those 75 slots can grow to ~1000 entries.

most of the jobs this thing processes are no problem. but every now and then, some client spins up a big batch job that fills your job slots & queue with Problem Messages.

Problem Messages are implemented for a different customer and involve a different processing path from what's typical (so most of the time things work out OK, but when your queue fills up with a sufficient number of Problem Messages, shit gets weird.)

The processing flow for Problem Messages involves calling out to a network service that wants to call *back* to the service you just called it from - and block until it gets a response.

So, think about the processing queue for the service I just described: 75 job slots, and ~1000 queued queries in front of all that. Non-problem queries can be processed just fine. But Problem Messages can't be processed to completion until the callback query is *first* processed - and under heavier load conditions, the callback query is stuck out in that 1000-deep queue somewhere, waiting in line. Under certain conditions, most, if not all, of the 75 threads will be waiting on something that's waiting on a message that's way down there in the queue, so the overall system suddenly becomes VERY SLOW or grinds to a halt entirely. But in a way that's very hard to diagnose, because it's highly conditions-dependent and most of the time it looks just fine (albeit, at the best of times, it probably has higher latency than it should, but nobody knows that).

So it ends up subtle to debug: conditions- and timing-dependent in a way that's very similar to classic thread locking problems.
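That wedge can be reproduced in miniature with a 2-slot pool. This is a deliberately contrived sketch: the 0.3s timeout and the barriers exist only to make the hang observable and deterministic in a demo (in production it's a silent stall), and all the job names are made up to mirror the description above.

```python
import queue
import threading

# "Problem" jobs block until a matching "callback" job runs, but the
# callbacks sit *behind* them in the same queue. Once both slots hold
# problem jobs, nothing is left to run the callbacks: the pool wedges.

jobs = queue.Queue()
results = []
slots_full = threading.Barrier(3)   # 2 workers + main thread
verdicts_in = threading.Barrier(2)  # both workers

def worker():
    while True:
        job = jobs.get()
        if job is None:
            break
        kind, done = job
        if kind == "problem":
            slots_full.wait()            # both slots now hold problem jobs
            if done.wait(timeout=0.3):   # waits on its callback -- same pool!
                results.append("finished")
            else:
                results.append("wedged")
            verdicts_in.wait()           # record both verdicts before moving on
        else:                            # the callback a problem job needs
            done.set()

d1, d2 = threading.Event(), threading.Event()
jobs.put(("problem", d1))
jobs.put(("problem", d2))
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
slots_full.wait()                        # callbacks join the queue only now
jobs.put(("callback", d1))
jobs.put(("callback", d2))
jobs.put(None)
jobs.put(None)
for w in workers:
    w.join()
print(results)  # → ['wedged', 'wedged']
```

Scale the slots to 75 and the queue to ~1000 and you get the conditions-dependent behavior described: mostly fine, until enough problem jobs land at once.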

*** don't even think about *** trying to tell me that I just needed to learn to think within the message-passing paradigm, because in any shop that employs a sufficiently large number of programmers, you will have a training problem... and frankly, the vast majority of developers are used to thinking within a synchronous framework and simply don't understand why you can't ever create a cross-network-service callback loop. The fact that the logic is cross-service at all is often hidden deep down within your call stack - people end up working with this kind of large codebase and trying to treat it as if it were monolithic. Case in point: I just described to you the diagnosis of an actual production problem that's biting us this week, **and the guy who created the problem is not even inclined to believe me.**

[#] Mon Apr 23 2018 18:46:39 UTC from fleeb <>

So, you have a pool of threads that occasionally gets hung because a particular kind of job isn't asynchronous?

Couldn't you convert that job into an async job?
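One sketch of what that conversion could look like, using made-up job kinds that mirror the scenario above: instead of blocking its worker, a "problem" job parks a continuation and returns its slot to the pool; the callback, when dequeued, re-enqueues the continuation as an ordinary job. A single worker can then drain any mix of jobs without wedging.

```python
import queue

# Single-threaded for clarity: the point is that no job ever *blocks* a
# slot waiting for another queued job, so slot count no longer matters.

jobs = queue.Queue()
parked = {}   # job id -> continuation waiting on its callback
results = []

def worker():
    while not jobs.empty():
        kind, job_id = jobs.get()
        if kind == "problem":
            # Don't block: remember what to do when the callback comes in.
            parked[job_id] = lambda j=job_id: results.append(f"done {j}")
        elif kind == "callback":
            jobs.put(("resume", job_id))       # hand back to the pool
        elif kind == "resume":
            parked.pop(job_id)()               # run the parked continuation

jobs.put(("problem", 1))
jobs.put(("problem", 2))
jobs.put(("callback", 1))
jobs.put(("callback", 2))
worker()
print(results)  # → ['done 1', 'done 2']
```

The cost, per the training-problem complaint above, is that every developer touching the processing path now has to know it's continuation-style rather than synchronous.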
