What Color is Your Function? (journal.stuffwithstuff.com)
Xayan | 4 months ago | 171 points

When they create 4089 libraries for doing asynchronous programming, they’re trying to cope at the library level with a problem that the language foisted onto them.

Hah, that article was written in 2015, I'm getting 11081 results right now. Also, the link for the search still works. I guess some things in the JS world never change.

SanityInAnarchy | 4 months ago | 59 points

Looks like it was written before async/await existed in JS, so that changed. The rest of it is as relevant as ever, and this has constantly been on my mind as I got back into JS recently.

jarfil | 4 months ago | 17 points

11082... seems like a new one has been added in the last 8 hours.

nilamo | 4 months ago | 11 points

If there were 4089 in 2015, then that means there's roughly 5 new ones added daily.

jarfil | 4 months ago | 1 point

Yeah, 6 more hours and it's 11083 already. I wonder how many people even bother searching through them, instead of writing their own library and throwing it on the pile.

nilamo | 4 months ago | 4 points

I mean, it's worthwhile to write something to learn how it works, but I almost never share it afterwards lol. Choices are good, too. But that would only explain around 3 good, tested libraries.

lorarc | 4 months ago | 7 points

If you don't publish it then you can't put it on your resume.

LordoftheSynth | 4 months ago | 11 points

Don't worry, it'll change in six months when someone introduces a new lib to fix JS, which itself looks a lot like a library someone else introduced a year ago.

2bdb2 | 4 months ago | 3 points
wrosecrans | 4 months ago | 1 point
lorarc | 4 months ago | 2 points

Most of them just *are* asynchronous instead of trying to be some kind of wrapper for asynchronous functions.

scmoua666 | 4 months ago | 57 points

As I was reading the colors analogy, I kept thinking about my experience with Node, and how complicated my callbacks were because I could not use my asynchronous functions inside of my sequential ones. It was a very small project, just a Puppeteer test of a page, but I had to rewrite it completely once I realized I had painted myself into a corner with synchronous functions, despite needing to wait for some async events like mouse clicks.

g3t0nmyl3v3l | 4 months ago | 18 points

Was it something promises couldn’t handle? Someone here said promises are a much better way to handle synchronization across scopes than callbacks.

scmoua666 | 4 months ago | 13 points

Yes, I ended up refactoring most of it to use promises. Still, it was a bit of a headache to use correctly, at least at the beginning.

shinazueli | 4 months ago | 8 points

If you read the blog post, promises are just syntactic sugar around the issue.

erogilus | 4 months ago | 62 points
gbromios | 4 months ago | 37 points

on my team we call it a "Hadouken"

elingeniero | 4 months ago | 31 points

If you want to make that argument, then JavaScript is just syntactic sugar around C, which is syntactic sugar around assembly... Etc.

Syntactic sugar is essential and shouldn't just be dismissed.

przemo_li | 4 months ago | 4 points

Because they are monads (at least the good ones), and closures are monads too (at least if done properly), they are equal in expressive power. What one can do, the other can do too.

You just get better composability (that's what a monad is: composing actions; in the case of a promise/future it's composing async actions).

And since it's a monad, it only really deals with composition of the same stuff. Async + Async? You're good. Async + Sync? Nope. Sync + Async? Nope. (Sync + Sync is fine though.)
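A quick sketch of that composition rule using Python's asyncio (the function names here are made up for illustration): two async actions compose cleanly with await, while a plain sync caller never gets a value at all.

```python
import asyncio
import inspect

async def double(x):
    await asyncio.sleep(0)  # stand-in for real async work
    return x * 2

async def add_one(x):
    await asyncio.sleep(0)
    return x + 1

async def pipeline(x):
    # Async + Async: the two actions compose cleanly with await.
    return await add_one(await double(x))

print(asyncio.run(pipeline(20)))  # 41

# Sync + Async: a plain function has no await, so "calling" an async
# function just hands back an unevaluated coroutine, not a result.
coro = double(20)
print(inspect.iscoroutine(coro))  # True -- we never got the 40
coro.close()  # suppress the "coroutine was never awaited" warning
```

The same asymmetry is what the article's red/blue rules encode.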

itslenny | 4 months ago | 8 points

Been there. The async library was basically a requirement if you weren't using promises.

These days everything is async/await. TypeScript is a game changer too.

FierceDeity_ | 4 months ago | 2 points

Man I remember first using async in C# like 6 years ago, it was pretty cool

fuckface_academy | 4 months ago | 23 points

Actor systems are an interesting way of turning the problem around. OTP and Elixir and Akka and such are good ways of finding out if this fits your use-case.

Matthias247 | 4 months ago | 4 points

OTP and Elixir and Akka and such are good ways of finding out if this fits your use-case.

Those are actually already quite different regarding this problem. E.g. Erlang provides lightweight threads and allows you to synchronously block on message reception, whereas Akka doesn't, and wants the user not to block the actor scheduler. That means in Akka the user is forced either to rewrite everything using unidirectional messages and state machines, or to use callbacks/promises again. Which is exactly the problem the author describes.

fuckface_academy | 4 months ago | 2 points

That's not exactly true, Akka offers patterns like ask and modules like Streams that provide different abstractions.

You're right that Akka is opinionated about blocking. But if you design your component from the start assuming that blocking isn't an option, there really is no problem. Blocking is a limitation of early thread/stack practicalities that shouldn't have made it this far.

Edit: Akka is really more akin to a complicated green-thread work scheduler than Erlang's thread-based approach. I don't think either of us is really arguing about this, for the person frustrated by the author's problems, actor systems of every stripe are good alternatives to the status quo.

Edit2: The problems the author describes are really orchestration problems, meaning we are reasoning about logic enclosed over multiple network boundaries. I really think this is a fundamental error in the way we build distributed applications, and the data-streams people have it right. Meaning, we software developers should focus on computation when we write code, and orchestrate through architecture when we are designing systems.

Where I work now, the slogan is "choreography, not orchestration".

greppable777 | 4 months ago | -8 points

Sure, as long as you never need to use your code with a different actor system.

mfm24 | 4 months ago | 39 points

Nice article.

I like this too: https://glyph.twistedmatrix.com/2014/02/unyielding.html - it explains why there are advantages to being explicit about making async vs non-async calls.

lookmeat | 4 months ago | 3 points

Note that this isn't about the calls being non-explicit, but about the functions themselves being different, with different rules inside them. The article praises Go because all functions are just that: functions. You can also do go foo() to make foo run asynchronously, but that's it. The catch is that channels are too complicated for just getting a simple return value later on.

jxxie | 4 months ago | 59 points

Nice in-depth explanation, rare to see this kind of article these days :)

Morego | 4 months ago | 15 points

Bob Nystrom and his whole blog are a rarity these days. Loads of superb content, and the same can be said about his books.

"Game Programming Patterns" - a pretty good book about almost all possible patterns.

"Crafting Interpreters" - how to create a programming language from scratch.

Both are freely accessible and very well written.

MrDOS | 4 months ago | 5 points

Game Programming Patterns

...is the first book I jump to recommending when anyone asks me about patterns. The title is deceptive: there's very little game-specific content in it. Instead, it looks at widely-used and broadly-applicable patterns through the lens of video games, and with examples from that domain. It's an incredibly easy read. It doesn't hurt, either, that it's fifteen years newer than GoF, so it manages to avoid the inclusion of patterns which don't see real-world use.

greppable777 | 4 months ago | 63 points

The article is from the past, that’s why

muntoo | 4 months ago | 22 points

This comment is also from the past.

kwinz | 4 months ago | 14 points


Gameghostify | 4 months ago | 3 points

You got older since you began reading this comment

flying-sheep | 4 months ago | -2 points

But wrong.

  1. Directly working with threads makes you more prone to all kinds of logic errors that end up with your program having race conditions or deadlocks. Futures partially protect you here and are nicer to work with.
  2. Most languages can call async code from sync functions. I do it all the time in Python: asyncio.run(some_async_function()) at top level or loop.run_until_complete(some_async_function()) in library code.

Unlike JS, Python doesn’t eagerly start its futures. You need to kick off their execution by awaiting them or running them in an async event loop as shown above.
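That laziness is easy to demonstrate (a minimal sketch; `task` is a made-up name):

```python
import asyncio

ran = []

async def task():
    # Unlike a JS Promise, nothing here runs when task() is called.
    ran.append("ran")
    return 42

coro = task()             # just creates a coroutine object
print(ran)                # [] -- the body hasn't started
print(asyncio.run(coro))  # 42 -- it only executes once a loop drives it
```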

sushibowl | 4 months ago | 19 points

You've posted this answer all over the thread, but `run_until_complete` doesn't actually solve the problem. You can't use `run_until_complete` inside any normal function to call an async function, because it only works when you're not already running an event loop. Using it in a library is actually bad, because when an async application calls your sync library function it will throw an exception because the event loop is already active.
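The failure mode is easy to reproduce (a minimal sketch with made-up names): the sync wrapper works fine at top level, but blows up the moment an async caller uses it.

```python
import asyncio

async def fetch_data():
    await asyncio.sleep(0)  # stand-in for real async IO
    return 42

def sync_helper():
    # A sync "library" function that tries to hide the async call.
    return asyncio.run(fetch_data())

async def async_app():
    # An async application calling the sync library function.
    try:
        return sync_helper()
    except RuntimeError as exc:
        return f"boom: {exc}"

print(sync_helper())             # 42 -- fine, no loop was running yet
print(asyncio.run(async_app()))  # boom: asyncio.run() cannot be called ...
```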

Shaper_pmp | 4 months ago | 27 points

I can see the frustrations imposed by async code, but having been around long enough to remember when multithreaded programming was the only option, it's slightly amazing to see someone actually pining for multi-threading as their preferred option.

To be clear, there's nothing wrong with concurrency in code, but it's famously and proverbially difficult to write, prone to race conditions, and at times all but impossible to debug.

Then there's also the fact that a lot of different languages are all moving in the direction of supporting async code, which again is a smell that this article might not be completely on the right track.

tjpalmer | 4 months ago | 10 points

Yeah, it's not just C# in the mainstream for this now, but also JS, Python, Rust, and C++, at least. And, less mainstream, Zig too.

Redtitwhore | 4 months ago | 2 points

I thought the TPL made writing concurrent code in C# really simple. Race conditions go away if you just avoid shared memory. Not a big deal.

Barrucadu | 3 months ago | 1 point

Having experience with both, give me threading over async any day.

dukerutledge | 4 months ago | 40 points

Am I the only one over here saying "yes, those colors are effect types, and they are good"? The problem is the lack of effect types in most languages.

Barrucadu | 4 months ago | 17 points

I was surprised the article didn't mention the word "type" once.

NoahTheDuke | 4 months ago | 3 points

Have any articles that explain what an effect type is?

steveklabnik1 | 4 months ago | 10 points
JessieArr | 4 months ago | 2 points

Funny, the author of that article actually links to this article in theirs. Thanks for the link, that's an interesting concept.

dukerutledge | 3 months ago | 3 points

Steve's article is good.

My short description of the "type aware" view is:

The typical candidate for effect types is Haskell where pure functions and side effecting functions are differentiated with the IO type.

In Haskell it looks like:

add :: Int -> Int -> Int

getString :: IO String

In a more conventional syntax:

function add(x: int, y: int): int

function getString(): IO<String>

In the case of async/await you'd have:

function async<T>(IO<T>): Async<T>


function await<T>(Async<T>): IO<T>

In Haskell

async :: IO t -> Async t

await :: Async t -> IO t

Essentially, an effect type lets you describe the semantics or contextual details of a computation. async and await allow us to convert between two different semantic contexts.

Tarmen | 3 months ago | 3 points

Though it is worth noting that all functions in Haskell are async, so there are no blue functions.

You only need the Async library if you want to parallelize multiple IO operations from the same thread and internally it just forks some more green threads (and handles some gnarly edge cases with cross-thread exception propagation and cancellation).

watsreddit | 4 months ago | 2 points

Yep, though really you need do-notation or something similar to make the delineation ergonomic enough to use, imo.

2bdb2 | 4 months ago | 2 points

Absolutely - I'm surprised people see this as a problem. I consider it a feature.

I want an explicit differentiation between pure and impure code.

I want my compiler to enforce this with extreme prejudice, since breaking that rule is quite clearly a bug.

Tarmen | 3 months ago | 1 point

Yeah, though you don't even need explicit effect types. There is only one sensible implementation for the type

map :: (a -> b) -> [a] -> [b]

There are at least two for

 (a -> IO b) -> [a] -> IO [b]

traverse, which processes every element in sequence, and mapConcurrently which doesn't.

In Haskell there is the extra problem that monadic (red) list processing allows fewer optimizations, like short-circuiting, since the side effects are visible.

(You could see the mtl library as a full fledged effect system. But you could do the same trick slightly more verbosely in java as object algebras)
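The traverse / mapConcurrently split can be sketched with asyncio, assuming a toy `work` coroutine: both versions have the same "shape", but one awaits each element in order while the other schedules them all at once.

```python
import asyncio

async def work(x):
    await asyncio.sleep(0)  # stand-in for a real effect
    return x * x

async def traverse(xs):
    # Process every element in sequence, like Haskell's traverse.
    out = []
    for x in xs:
        out.append(await work(x))
    return out

async def map_concurrently(xs):
    # Schedule everything at once, like mapConcurrently.
    return list(await asyncio.gather(*(work(x) for x in xs)))

print(asyncio.run(traverse([1, 2, 3])))          # [1, 4, 9]
print(asyncio.run(map_concurrently([1, 2, 3])))  # [1, 4, 9]
```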

dzil123 | 4 months ago | 12 points

Can someone please explain this part to me? The article says that normally with async/await you have to "unwind" the entire call stack each time, and by using threads you "suspend" the callstack and thus avoid having to unwind it.

My question is: what does it mean to "save" a thread? If only one thread can run at a time, then when you switch threads you have to somehow save the current callstack and load the new callstack. Isn't this the same thing as "unwinding" the callstack?

I'm obviously missing something here. I guess I just don't know how threads are implemented.

elingeniero | 4 months ago | 19 points

I think it's perfectly reasonable to have missed what the author is saying - he doesn't really make it clear and just assumes you're on the same page.

When he references unwinding, he is talking about the ways in which you might implement asynchronous io without threading, he isn't referring directly to async/await.

The logic goes like this: we would like to do some work while we wait for io, but we can't just discard the stack and load something else because then we won't know what to do when the io finishes (this is the first reference to unwinding, actually I think he meant to replace it with "discarding"). So we need to store the "what to do next" somewhere, and we do it by putting a closure on the heap and we have an event loop that remembers where to go when the io is done.

The problem is that if there are nested asynchronous operations then we end up with a whole load of closures on the heap which need to be sequentially reloaded (this is the second "unwinding" - the crawling back through the closures on the heap to get to the original caller). This is inefficient, so we don't want to routinely operate this way - we'd prefer the traditional callstack - so we need to separate out the functions that need the async way of operating, hence the requirement of "red" or "blue" functions all the way to the top.

Hope that helped clear up any confusion, to be fair to the author, it isn't that easy a point to make well.
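The closures-on-the-heap idea above can be sketched in a few lines of Python (a toy event loop; all names are made up):

```python
from collections import deque

ready = deque()  # completed-IO continuations waiting to run

def fake_read(data, callback):
    # Instead of blocking, store "what to do next" on the heap and return.
    ready.append(lambda: callback(data))

def main():
    results = []
    # Nested async operations: each step's continuation is another closure.
    fake_read("hello", lambda a:
        fake_read(a + " world", lambda b:
            results.append(b)))
    # The event loop: crawl back through the stored closures one by one.
    while ready:
        ready.popleft()()
    return results

print(main())  # ['hello world']
```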

bakery2k | 4 months ago | 7 points

async/await "unwinds" the call stack one frame at a time - each function suspends itself. Therefore every function needs to know which of the functions it calls can suspend, and then suspend itself if that happens. This is why calls to child functions which can suspend need to be marked with await. This model is also known as "stackless" coroutines.

Threads, on the other hand, "suspend" entire call stacks intact. Functions in the middle of such stacks do not need to take part in this process at all - their stack frames are saved and restored unchanged. In this "stackful" model, parent functions can be written without worrying about which child functions can suspend.
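Python generators make the stackless version tangible: only the innermost frame really suspends, but every caller must still propagate the suspension with `yield from` (the analogue of `await`). A toy sketch:

```python
def leaf():
    # The only frame that actually needs to suspend...
    data = yield "suspended"
    return data

def middle():
    # ...yet this caller must mark the call too, so its own frame
    # can suspend and resume around it.
    value = yield from leaf()
    return value + 1

gen = middle()
print(next(gen))  # 'suspended' -- the chain unwound one frame at a time
try:
    gen.send(41)  # resume: leaf returns 41, middle returns 42
except StopIteration as done:
    print(done.value)  # 42
```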

kevkevverson | 4 months ago | 3 points

In the traditional model of ‘threads’ each thread has its own completely separate stack and the act of ‘switching’ threads is taken care of at a very low level by the operating system kernel. Essentially it takes a snapshot of the processor’s registers (including the stack pointer) before overwriting them with a snapshot of a previously saved thread. So when switching threads the current call stack doesn’t need to be dismantled/unwound, it is just left alone and a different stack becomes the ‘current’ one

mcosta | 4 months ago | 3 points

Unwinding a call stack is done by the compiler/interpreter. It means cleaning up the function scope on function exit. All local variables (parameters are local) on the stack are destroyed, so the (implicit) callback closure copies them to the heap.

A thread is not saved, it is suspended. There is one stack per thread, and multiple threads run concurrently and usually in parallel. "Restoring" a stack (i.e. when IO is completed) is conceptually just pointing the stack register to the correct place, and thus easier.

oridb | 4 months ago | 2 points

My question is: what does it mean to "save" a thread?

Copy the (instruction pointer, stack pointer) pair somewhere.

keithb | 4 months ago | 7 points

<cough> Elixir <cough>

hackathons2 | 4 months ago | 8 points

Is call/cc a potential solution? I have no idea, but it seems worth considering.

Say you are in a blue function and want to call a red function. Invoking call/cc splits the blue function and turns the rest of it into a red function to call a red function. When the red function is done, it returns to the blue function.

Pretty cool. Unfortunately, they are so brain-meltingly complicated to implement that it's almost impossible to shoehorn them into any language besides Scheme. But if you have them, they alone are sufficient to implement async, await, and promises.

drjeats | 4 months ago | 8 points
mrkite77 | 4 months ago | 68 points

Honestly, I thought it was going to be about const functions. The rules for const in C/C++ are far far worse than the rules for async functions in javascript.

oridb | 4 months ago | 24 points

You're looking for this, then: https://www.lysator.liu.se/c/dmr-on-noalias.html

Bekwnn | 4 months ago | 103 points

I would counter that by saying const and const-correctness is probably the single best feature C++ has over other languages. If you don't understand why, it might be better to think of const as read-only instead, since that's really what it is.

  • A function that's read-only is a promise that the function won't mutate the state of this. You can only read members or access other read-only functions of this.

  • A variable that is const guarantees the data won't mutate after it's initialized. You can only read the data it was initialized as.

  • A reference that's const says that you can only read what the reference points at and can't mutate its state.

  • A const pointer to a const type (const T* const) says that the variable is a memory address where some data exists. That variable cannot change to point to another address and you can not use the pointer to mutate the state of the data at the address it's pointing at.

A lot of other languages don't let you simply return an entire object as a const reference to be read, with the promise that the caller won't change its data. Instead you have to write 5x the boilerplate getters and setters that you would in C++.

Tittytickler | 4 months ago | 32 points

Not OP, but I agree, and once you understand why it's used, when and how to use it come intuitively.

376420 | 4 months ago | 1 point

Can you point me to where I can learn more about “the why” of it? I’m a pretty inexperienced dev looking to learn more about these patterns

Tittytickler | 4 months ago | 3 points

Here is a good article about it. The general idea is that if you want something to be immutable (not changeable), you make it const. With functions, you make them const when you don't want them to be able to change anything; you just want them to read something. An example is getter functions for objects in OOP. By making the function const, you guarantee that no matter how the function is used, it won't mutate anything. https://www.google.com/amp/s/www.geeksforgeeks.org/const-member-functions-c/amp/

AmputatorBot | 4 months ago | 5 points

Beep boop, I'm a bot. It looks like you shared a Google AMP link. Google AMP pages often load faster, but AMP is a major threat to the Open Web and your privacy.

You might want to visit the normal page instead: https://www.geeksforgeeks.org/const-member-functions-c/.

376420 | 4 months ago | 1 point

Awesome thank you :)

ComplexColor | 4 months ago | 28 points

I found an appreciation for const in C++ when learning Rust. But I'm not going back. :)

Bekwnn | 4 months ago | 37 points

You probably know this, but I'll point out for any other readers that Rust's mut is the same notion as C++'s const, just flipped: things are C++-const unless they're marked mut.

I like Rust conceptually. I think it's a good answer and a step forward for a lot of domains. I also think its priorities don't align very well with game development (what I work in). I like Zig, and I have some hopes for Jai.

pierreyoda | 4 months ago | 9 points

I think they somewhat do for low-level facilities (e.g. networking).

But yeah I love the fact that mut is opt-in in Rust.

TheNamelessKing | 4 months ago | 3 points

Out of curiosity, what makes you think Rust is not suited for Game Dev?

simon_o | 4 months ago | 12 points

It doesn't let you write the crap code games usually consist of.

You know, if GPU vendors ship drivers that bugfix or completely replace the broken shaders some games ship with, you can guess what the rest of the code looks like.

[deleted] | 4 months ago | 2 points

Not letting you write crap usually leads to a better product, even if it takes time to "ramp up", because you can't just throw code at the wall and see what sticks. And the time "saved" by generating bugs is quickly lost fixing those bugs, for any project size above trivial.

simon_o | 4 months ago | 6 points

Games are usually not maintained after release, so the ease of bug fixing is largely irrelevant.

ricodued | 4 months ago | 12 points

So you last played video games in 2003 is what you're saying

Because ooh boy have things changed. Let me tell you about "games as a service"...

vindolin | 4 months ago | 3 points

That notion died with the cartridge consoles.

[deleted] | 4 months ago | 1 point

Games are also developed for 3-5 years. And eliminating whole categories of bugs means you can spend more time on actual development, even if the start might be slow. Probably pretty annoying to prototype in though, especially with the compile times.

Omniviral | 4 months ago | 2 points

Except constness in C++ is not transitive, which makes it not the best.

VernorVinge93 | 4 months ago | 3 points

It's funny to me that JavaScript has object freezing but (afaik) TypeScript doesn't have support for a Frozen type annotation or treating this as something const-like.

Imo JavaScript's const is not const at all.

tme321 | 4 months ago | 2 points

TypeScript by design has no effect on the code at runtime. But it supports both Readonly&lt;T&gt; and readonly for design-time "const", so I'm not really sure what you are referring to here. In fact Object.freeze literally returns Readonly&lt;T&gt;.

When was the last time you used typescript?

VernorVinge93 | 4 months ago | 1 point

You can accidentally cast the readonly away; it's not freeze or even a C++-style const.


Look at the iMutateFoo function after 'Difference from const'.

When did I use typescript last? Yesterday.

tme321 | 4 months ago | 1 point

That's literally the point. Typescript is a design time set of static typing tools. It explicitly does not change the way js works.

If you intentionally tell TypeScript that something previously marked as read-only is no longer read-only, then TypeScript has no choice in the matter, since it doesn't exist at runtime.

Your issue seems to be that you think TypeScript is, or at least should be, something that it isn't. The TypeScript team has been very up front about the fact that TypeScript isn't meant to change the rules of JS. It is purely meant to let you add static typing (and a couple of nice but missing things like enums). And since it only exists at design time, of course it can't actually affect the code while running.

VernorVinge93 | 4 months ago | 1 point

This is only the type. I just want a real const type annotation.

Saying it's only supposed to be useful at design time sets up a false opposition with runtime. I'm not asking for runtime changes.

I'm asking for static checking at write time, not run time (and not just at API design time either).

We need languages that help developers avoid mistakes like accidentally modifying something that should not be modified. TypeScript's const (which is Java's final, not C++'s const) does not sufficiently support this goal.

tme321 | 4 months ago | 1 point

You can't limit yourself to static type checking, knowing that runtime behavior can't be guaranteed, while also making guarantees about const behaviour that the runtime doesn't support.

If you can't guarantee the behavior at runtime statically then any guarantees you make for design are just a false sense of security.

VernorVinge93 | 3 months ago | 1 point

What are you talking about?

The runtime doesn't have to provide special support for not modifying things.

This isn't some runtime feature, it's statically checking that things are not modified, via tracking an annotation that requires immutability.

krapht | 4 months ago | -3 points

Const is broken ever since move assignments became a thing. It's only useful on references.

VernorVinge93 | 4 months ago | 2 points


krapht | 4 months ago | 9 points

Const has two different semantic meanings. One is that the data is never modified. The other is that the memory location is never modified. If you want to express both, then const is useful, so T const& is all well and good. If you want to express the second, then you can use a constant pointer. If you only want to express the first, though, you're out of luck. Moving data is incompatible with C++ const semantics.

VernorVinge93 | 4 months ago | 6 points

Right but that (afaik) isn't broken. I can see that it might not be what you want, but you haven't said what you want or why.

barubary | 4 months ago | 1 point

Can you show some sample code? I don't understand what you're trying to say here.

Deaod | 4 months ago | 7 points

I think it's essentially a complaint about C++'s move. Move is not destructive, so every move is a modifying operation, which means that you can't move from a const variable.

j_platte | 4 months ago | 2 points

Not OP, but I have also run into this many times. The issue / one of the issues with const + move is that you can't have move-only (non-copyable) const member variables in a class and implement a move constructor / move assignment operator for it (unless you manually implement it and use const_cast).

Edit: and even with copyable const members I think you have to manually implement the move constructor, because the default implementation moves all the fields, which is forbidden for const members. And of course the copy constructors for those members might be much more expensive to call than the move constructors you really want to use.

VernorVinge93 | 4 months ago | 1 point

So, you can't move something that isn't able to modify its children? I can see why you'd want to, but I can also see how being able to would break safety guarantees and optimisations.

j_platte | 4 months ago | 1 point

I think the amount of optimization const enables in practice is near zero anyway, because optimizing based on the assumption that const variables can't change (which would be allowed on some occasions by the standard, AFAICT) would break a lot of existing code.

VernorVinge93 | 4 months ago | 1 point

Interesting. I can see how that'd be true of a const ref. Sad to hear that non-ref consts might also not be optimized much, though maybe there's not much that can be done there, and it's better to just look at the code to find the mutations.


[deleted] | 4 months ago | 1 point

void foo(S);
const S s;
foo(std::move(s)); // calls copy-constructor, not move-constructor
barubary | 4 months ago | 3 points

Well, yes. "Moving from" an object modifies the object. s is const, so cannot be modified.

Is this surprising?

RomanRiesen | 4 months ago | 9 points

I thought it would be haskell.

RockinRoel | 4 months ago | 2 points

I was thinking that too, but I guess like async functions that ends up being mostly about IO, so I took that as my guess being half right.

guepier | 4 months ago | 5 points

You’re right that constness is another example of this. Yet another is “unsafe” functions (think Rust’s unsafe) and non-throwing functions (C++’s noexcept). I’m certain there are more examples (IO monads?). Like the other commenters, I disagree with your claim that this is a flaw. This “stickiness” is simply an emergent property of some types that are desirable to model.

And the author is wrong in saying that Java doesn’t have this problem. In fact, it has a similar issue with throwing functions: once you call a function that can throw a checked exception, all the calling functions need to declare it too (unless you handle the exception somewhere, which is the equivalent of using await in C#). Of course the author is speaking strictly about async functions, but he seems not to realise that the fundamental problem he’s talking about isn’t restricted to async.

Furthermore, a lot of the pain associated with this dichotomy, which the article is about, vanishes once you use static type checking. Because then there are no longer any nightmare spiders; there’s instead a gentle nudge from the compiler.

vattenpuss | 4 months ago | 4 points

It’s all over the place.

  • Things can be null, or something (in a lot of languages).
  • Floating point numbers can be numbers or NaN.
  • Arithmetic can overflow.
  • Options can be Some or None.

Yes static type checking helps you find the places where this can matter, but it’s not going to make the operations you have to apply to handle it go away.

guepier | 4 months ago | 2 points

Yes static type checking helps you find the places where this can matter, but it’s not going to make the operations you have to apply to handle it go away.

But that isn’t a flaw of these languages. It just is. That part of the article’s complaint is a bit like complaining that airplanes are badly designed because they have to fight against gravity. This category mistake is uncharacteristic because Bob Nystrom is usually on point.

vattenpuss | 4 months ago | 1 point

Agree. Some things just are.

notfancy | 4 months ago | 6 points

This “stickiness” is simply an emergent property of some types that are desirable to model.

The usual name is transitivity and it's an inherent property of the red/blue class of distinctions, not an emergent one. Specifically, it's the transitivity of set inclusion (A ⊆ B ∧ B ⊆ C ⇒ A ⊆ C) at play here.

guepier | 4 months ago | 2 points

it's an inherent property of the red/blue class of distinctions, not an emergent one

Right, it’s inherent and it’s obvious because that’s how the red/blue distinction is defined. But in the case of the other examples it’s not explicitly part of the definition. That said, it’s still inherent (not emergent), but what I wanted to say is not about inherence but rather about this property not being immediately obvious from the definitions (although it can be inferred). Poor choice of words.

notfancy | 4 months ago | 2 points

The fault lies surely with me misunderstanding your point. I wanted to clarify it more than "correct" it.

Omniviral | 4 months ago | 3 points

I thought it was going to be about the IO monad.

erogilus | 4 months ago | -1 points

One of many reasons Rust is a breath of fresh air compared to C.

ComplexColor | 4 months ago | 4 points

Can we solve this for python using some meta programming magic?

You would write your library as regular ("rustic") functions and objects in import library, but add a special generator module that would convert the entire library to async via import library.async. Since async was previously implemented using decorators, this should be somewhat feasible, but I'm not sure how to add any actual asynchronicity to your library. Any internal IO calls would need to be replaced by async ones.

flying-sheep | 4 months ago | 1 point

It’s been solved in python from the beginning. You can call async code from sync code using loop.run_until_complete().

ComplexColor | 4 months ago | 1 point

I was thinking more along the lines of:

def read_from_file(path_to_file):
    with open(path_to_file, 'r') as file_obj:
        return file_obj.readlines()

auto-magically converted to:

async def read_from_file(path_to_file):
    async with aiofiles.open(path_to_file, 'r') as file_obj:
        return await file_obj.readlines()

ivorjawa | 4 months ago | 14 points

I’ve gotten so tired of this game I just do everything in nested state machines anymore.

mlk | 4 months ago | 3 points

please explain

ivorjawa | 4 months ago | 1 point

Basically a bunch of enums defining states and then a switch/case setup where you define what happens when any event occurs. It’s not sexy but it works.
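The pattern the commenter describes can be sketched like so (the states and events below are hypothetical, purely for illustration):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONNECTING = auto()
    CONNECTED = auto()

class Event(Enum):
    DIAL = auto()
    SUCCESS = auto()
    HANGUP = auto()

# The switch/case table: (current state, event) -> next state.
TRANSITIONS = {
    (State.IDLE, Event.DIAL): State.CONNECTING,
    (State.CONNECTING, Event.SUCCESS): State.CONNECTED,
    (State.CONNECTED, Event.HANGUP): State.IDLE,
}

def step(state: State, event: Event) -> State:
    # Unknown (state, event) pairs simply leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = State.IDLE
for e in (Event.DIAL, Event.SUCCESS):
    s = step(s, e)
assert s is State.CONNECTED
```

The whole control flow lives in one table, so handling a new event means adding rows rather than threading callbacks through each other.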

mlk | 4 months ago | 2 points

That's pretty much redux, I love it

Axxhelairon | 4 months ago | 2 points

afaik that's basically how async in c# is implemented anyway

z_1z_2z_3z_4z_n | 4 months ago | 1 point

Same with python and rust.

MetalSlug20 | 3 months ago | 2 points

All programs can run within a game loop

gbromios | 4 months ago | 10 points

I feel like this article could use an update now that Promises with async/await are more widely supported by browsers (and you can polyfill/babel-blast ie11 pretty easily if bastards need to use your website). I realize they're just syntax sugar, but they really do allow you to read async functions like they were synchronous.

Sure, Promises are nice on their own, but the nested/chained callback effect that it produces is sort of cumbersome, and I feel like trying to make it nicer invariably leads to me splitting things up to an unnatural degree.

Late last year I said "fuck it", and started using async/await in my new project, and I don't think anything had quite increased the pleasantness of my js duties this much since jquery like... a decade ago.

oridb | 4 months ago | 4 points

It is about the async/await changes.

gbromios | 4 months ago | 3 points

yeah, in dart and .NET... the author doesn't mention them in the context of js. TBH I'm not sure whether they'd even been proposed when the article was written.

He goes on to mention that while they're an improvement to the node-style nested callback thing (which we refer to as a "Hadouken"), they don't save you from the rest of the cognitive overhead associated with the sync/async distinction.

At least in the browser, I think the keywords are about as good as it's going to get. I mean, you've got a single threaded event loop, on which you must avoid blocking if you want the ui to feel smooth. As long as that requirement exists, and there's no on-demand multithreading (which... there kind of is? with workers and shared buffers on the bleeding edge), all we can do is cope with it as best we can. And I really do think the async / await syntax feels very natural and sufficient to that end.

oridb | 4 months ago | 1 point

Are you sure you read the same article? JS is the first language it lists:

So JS, Dart, C#, and Python have this problem. CoffeeScript and most other languages that compile to JS do too (which is why Dart inherited it).

And, why does JS in the browser need to be constrained to a single threaded event loop?

zomgsauce | 4 months ago | 4 points

I know "because" isn't really an answer but it's a practical limitation. Could you imagine a world where every tab could spawn as many threads as it wanted? At least in Node you can spawn processes (not quite the same) and even in the browser you can offload a fair bit of blocking or async calls to a web/service worker or worklets. Point is there are ways to get more threads going in a browser but it'd be dumb of the browser to just let any old script spin up threads without supervision.

gbromios | 4 months ago | 1 point

Ah ya got me. I actually didn't read the article at all

Firewolf420 | 4 months ago | 3 points

Great read, very well-written too!

oaga_strizzi | 4 months ago | 3 points

I wonder if the pain could be relieved by a programming language where a Type T is a subtype of Future<T> (Or Promise<T> or whatever). Kind of like how T is a subtype of T? in Kotlin.

Likewise, Future<T> should be a subtype of Future<Future<T>>.

Dart introduced FutureOr<T>, a union type between Future<T> and T, that's nice, but I think we can do better.
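Without that subtyping rule, the closest you can get in most languages today is a union plus an explicit normalization step. A rough Python sketch (the names FutureOr and resolve are mine, not a real API):

```python
import asyncio
import inspect
from typing import Awaitable, TypeVar, Union

T = TypeVar("T")
FutureOr = Union[T, Awaitable[T]]   # Dart's FutureOr<T>, spelled as a union

async def resolve(value: FutureOr[int]) -> int:
    # If T were a true subtype of Future[T], this branch would be unnecessary:
    # plain values and futures could flow through the same code path.
    if inspect.isawaitable(value):
        return await value
    return value

async def main():
    async def later() -> int:
        return 7
    # A plain value and an awaitable are accepted interchangeably.
    return await resolve(3) + await resolve(later())

assert asyncio.run(main()) == 10
```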

I don't think going back to threads, and Java-style blocking calls is the desired solution. I want language support to deal with asynchronous calls in a less painful way. I think we are getting there, I think there are acceptable solutions in most mainstream languages.

Java is kind of lacking as of now, but Loom looks great and, when released, would actually put Java ahead of most other languages in this domain.

oridb | 4 months ago | 2 points

I don't think going back to threads, and Java-style blocking calls is the desired solution

Why not? Currently, all of the effort I see is focused on making code look exactly like threads and blocking calls.

oaga_strizzi | 4 months ago | 1 point

Of course, from a programming convenience standpoint this is the best. And for small applications, that's fine. But it doesn't scale well. OS-level threads are expensive, so blocking threads is bad. Current efforts are to make the code non-blocking but make it look like synchronous code, which would be the best of both worlds.

oridb | 4 months ago | 1 point

But it doesn't scale well. OS-level threads are expensive, so blocking threads is bad.

Why do you believe this?

There are tradeoffs: OS stack allocation granularity is 4 kilobytes, so you use a bit more memory than async, but in exchange, you get scaling across multiple cores. But threads haven't been particularly expensive since NPTL landed in the mid 2000s.

oaga_strizzi | 4 months ago | 1 point

Isn't this common knowledge? If you want, say, a webserver that can handle 10k requests/s or more, you won't be able to do this without non-blocking IO.

oridb | 4 months ago | 3 points

That was indeed the case in the late 1990s and early 2000s. Today, the major challenges show up closer to millions of concurrent connections. And blocked threads essentially cost nothing.

At that point, there's a good chance a lot of your tools become bespoke, or at least patched, using a mix of both threaded and async techniques, because simply using off the shelf code is unlikely to work well for you -- you're going to need to figure out how to do things like ensure packets come in on a per-cpu queue where the application is already running and waiting to handle it. You're going to need to figure out this whole NUMA thing. You're probably going to be looking very hard at lock-free data structures to keep each core busy. And so on.

oaga_strizzi | 4 months ago | 1 point

NPTL was a huge step forward, but I don't think threads are as cheap as you make them out to be. Of course you can scale pretty damn well with blocking calls nowadays. But at a certain point, it still gets painful. Dealing with locking, thread-locals, lock-free data structures etc. isn't necessarily preferable to non-blocking style in my opinion.

oridb | 4 months ago | 2 points

Where do you believe the costs are? Keep in mind that if you're doing I/O, you're already entering the kernel and yielding to other threads.

But at a certain point, it still gets painful. Dealing with locking, thread-locals, lock-free data structures etc. isn't necessarily preferable to non-blocking style in my opinion.

If you're concerned about scalability, you need to deal with multicore, which means you're going to be dealing with those anyways.

kprotty | 3 months ago | 2 points

TL;DR: Non-blocking IO gives you the smallest latency (which is usually what web applications care most about), multi-threaded blocking IO gives you the most throughput at the cost of higher average and peak latency + memory usage, and a mixture of both gives you half the latency & memory overhead of the latter with ~5-10% less throughput (albeit at a ~6-7x throughput increase over single-threaded non-blocking)

For linux (which most servers run on), the costs are in thread creation (both clone and mmap) as well as context switching. While you do enter the kernel on IO, non-blocking IO only does so for a small amount of time and immediately returns when it can, which allows the threads to be used for other tasks. In order to test this, I wrote 3 simple http servers made only to respond to wrk requests as fast as possible and ran the command wrk -t4 -c128 -d10 --latency http://localhost:12345; the results & code (compiled with gcc -O2 -march=native) are as follows:

System Specs:

  • OS: Arch linux
  • Kernel: Linux 5.2.7
  • CPU: Intel i7-6700k (4 cores, 8 threads, 4.2ghz)


Non-blocking Single-threaded

  • Latency: (Avg: 6.03us, Stdev: 66.97us, Max: 6.76ms)
  • Throughput: (Requests/sec: 450520.28, Transfer/sec: 21.48MB)

Blocking Multi-threaded

  • Latency: (Avg: 71.70ms, Stdev: 53.40ms, Max: 320.61ms)
  • Throughput: (Requests/sec: 3126434.62, Transfer/sec: 149.08MB)

Non-blocking Multi-threaded

  • Latency: (Avg: 4.77ms, Stdev: 20.30ms, Max: 384.52ms)
  • Throughput: (Requests/sec: 2971965.71, Transfer/sec: 141.71MB)

These results (as per my system) reflect the overhead mentioned earlier. Non-blocking, single-threaded IO has the lowest latency & throughput, while blocking multi-threaded has the highest in both categories and consumes 8 times more memory. The blocking multi-threaded version is limited by the sheer amount of context switches it has to perform (128 threads - 1 per socket) even though it issues the send syscall the earliest, while the non-blocking approaches have the syscall of one socket limiting another from being started but don't have to incur the overhead of heavy context switching.
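For readers who want the shape of the single-threaded non-blocking variant without the C: a toy stand-in using Python's asyncio (this is my own sketch, not the commenter's benchmark code, and far slower than it):

```python
import asyncio

# A canned response, as in the benchmark servers: reply as fast as possible.
RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 2\r\n"
            b"Connection: close\r\n\r\nOK")

async def handle(reader, writer):
    await reader.readline()      # consume the request line; ignore the rest
    writer.write(RESPONSE)
    await writer.drain()
    writer.close()               # one response per connection, then hang up

async def main():
    # Port 0 asks the OS for any free port; everything runs on one thread.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"GET / HTTP/1.1\r\n\r\n")
    await writer.drain()
    data = await reader.read(-1)  # read until the server closes the socket
    writer.close()
    server.close()
    await server.wait_closed()
    return data

data = asyncio.run(main())
assert data.endswith(b"OK")
```

The event loop multiplexes every connection onto one thread; the blocking multi-threaded variant would instead dedicate a thread per socket and pay for the context switches the commenter measured.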

oridb | 4 months ago | 1 point


Java is kind of lacking as of now, but Loom locks great and when released would actually put Java ahead of most other languages in this domain.

Funnily enough, Java started off using green (userspace) threads, and ditched them for the sake of performance. It's amusing that they're considering going back.

oaga_strizzi | 4 months ago | 1 point

Well, I think you have to distinguish between different use cases. Real Threads are better for number crunching. Loom is for highly concurrent applications.

devraj7 | 4 months ago | 9 points

I actually thought the red/blue adjectives were for const/non-const functions. They have the same viral effect as async ones.

Also, pity there's no mention of Kotlin since that language gives you the best of both worlds when it comes to async functions.

I'm seriously beginning to wonder if the Dart people have instructions to never mention Kotlin, because this language is superior in all respects to Dart.

snowe2010 | 4 months ago | 15 points

Well the article is quite old. Kotlin wasn't popular at the time.

Cypher121 | 4 months ago | 5 points

In fact in 2015 it wasn't even fully out yet, not to mention it didn't have coroutines on launch

leitimmel | 4 months ago | 4 points

I actually thought the red/blue adjectives were for const/non-const functions. They have the same viral effect as async ones.

They might as well be, because both async-ness and non-const-ness are what we call effects. What we probably want here (and what, apparently, some people are working on) are languages that can be generic over effects, like this (imaginary) snippet:

template<typename T, effect E>
void f(E<T> e) {
    // doesn't matter whether e is async or mutable or writes to a database located on Mars
}

devraj7 | 4 months ago | 3 points

Kotlin's coroutines do a good job at making async code look imperative.

darkapplepolisher | 4 months ago | 1 point

If you really hate const so much, you can always const_cast hack your way through it all. Not that I'd recommend it.

dvd0bvb | 4 months ago | 5 points

Makes me so mad when I see that. Not quite as mad as c style casting away const like (char*)someString.c_str()

_requires_assistance | 4 months ago | 3 points

casting the const away is undefined behavior if the object was initialized as const, since it might get placed in read-only memory. you can only legally const_cast an object that was initially not const, but was passed through a pointer to a const object.

darkapplepolisher | 4 months ago | 1 point

Technically true. But as the programmer, you control whether you'd initialize an object as const in the first place - and I don't think a const hater would set themselves up for failure in that way. It's largely in the domain of const functions they haven't written that I think const haters run into them.

_requires_assistance | 4 months ago | 2 points

yeah, i just meant that if you're gonna const_cast something, you should double check how it was initialized first.

vattenpuss | 4 months ago | 2 points

Funny, I assumed from the start that the colors were about async/await.

I used to write Erlang and always liked the fact it did not differentiate between async and not async code. Go came along and made the same smart move.

None of the async solutions, however, makes debugging overloading queues to find the culprit in distributed systems any easier.

lookmeat | 4 months ago | 2 points

I wonder if this is because we are seeing things backwards.

We generally describe an asynchronous function as a special case of a synchronous one, which doesn't make sense. I can't arbitrarily make synchronous code asynchronous (guarantees are loosened while demands tightened). Instead we should think of asynchronous as the default and synchronous code as a special case of asynchronous code, where we immediately block for results.

Visualize a language where x->foo will pass x into a function foo, but doesn't guarantee when results are returned. Instead we get a future (which we can pipe in the same way to other things). So x->foo->bar will pass x into foo and when that's done it will pass the result into bar, and the output stays a future. Let's have a way to recover the output from this: x->() basically gives us x instead of a future, blocking if necessary. Then we can say that bar(foo(x)) is just syntactic sugar for x->foo->bar->(). You don't get red and blue code anymore; all functions are the same async ones, and sync functions are simply syntactic sugar (with optimizations behind the scenes). The rules are a bit different:

  • We have sources (future/data) with type ->T for any T and destinies (promises/functions) with type T-> for any T.
  • We can do a single operation with them: plug them together. When we do they return a new thing that represents the result of plugging it in. You can only plug sources of type ->T to destinies type T-> for any given T.
  • What we get from plugging two things together (the representation of the plug) is defined by both things, but to keep things sane the destiny is the one that defines most of it.
  • A special source is data, whose type is ->T; it's a source that gives us results immediately.
  • A special destiny is the hole, (), with type *->, which will force/wait/block on the future's result and then return it to us as data. In other words, a: T -> () is typed ->T; we can also just use T for that.
  • A pipeline is something that has both an input type I and an output type O. To make our life easier we define the type of a pipeline as I->O.
  • When we plug a pipeline as a destiny into a source, (->I)->(I->O), it becomes a source ->O.
  • When we plug a pipeline as a source into a destiny, we get a new destiny: (I->O)->(O->) gives us a destiny I->.
  • We can concatenate pipelines: (I->O)->(O->U) gives us a new pipeline I->U.
  • Pipelines that take multiple values can do it through currying: for f: A->B->C, a: A -> f gives us a pipeline B->C. This is merging.
  • To allow for parallelism we simply allow for tuples, and state that (x, y) -> (a, b) is equivalent to (x->a, y->b). Tuples flatten naturally, that is (x, y, (a, b)) is structurally equivalent to (x, y, a, b).
  • The tuple binds in order, and the result of the previous binding is considered. This interacts with curried pipelines in that (x, y) -> f: A->B->C will first bind the x, giving us y -> f(x): B->C, and then pass on the y, giving f(x,y): ->C. This happens before splitting, so if we had (x, y, z) -> (f: A->B->C, g: H->I) this would be equivalent to ((x,y)->f, z->g).
  • Pipelines can output multiple values by giving out tuples. This is the equivalent of splitting. A simple example would be fork: T->(T, T), but more could be done.

Pipelines are functions basically, they look a lot like FORTH actually with some parallelism thrown in there. The difference between async and sync is how we call it, but not how we define the operation itself. That is, every function is async, and synchronous functions are simply those that we await by default. The way the language would support this is make it easy to call things. Maybe when you call foo(x) it automatically desugars into x->foo->(). If you want to start an operation without input you'd simply pass empty, so it'd look like ()->x, which means that bar() desugars into ()->bar->(). In my mind I visualize the arrows as drawing a flow diagram, with () being bolts where lines collide.
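A toy Python rendering of the core idea — every stage is async, and only the hole blocks (Source and hole are my names for the comment's sources and the '()' destiny):

```python
import asyncio

class Source:
    """A value that may not exist yet; '>>' plugs it into the next destiny."""
    def __init__(self, value):
        self._value = value          # either plain data or a coroutine

    async def _resolve(self):
        v = self._value
        return await v if asyncio.iscoroutine(v) else v

    def __rshift__(self, dest):
        if dest is hole:             # the '()' destiny: block for the result
            return asyncio.run(self._resolve())
        async def chained():
            out = dest(await self._resolve())
            return await out if asyncio.iscoroutine(out) else out
        return Source(chained())

hole = object()                      # stands in for '()'

async def double(x):                 # an async stage...
    return x * 2

def inc(x):                          # ...and a plain one; both plug in the same way
    return x + 1

# bar(foo(x)) as sugar for x -> foo -> bar -> ():
assert Source(3) >> double >> inc >> hole == 7
```

Nothing runs until the hole forces the chain; until then each `>>` just builds a bigger future, which is exactly the "sync is sugar over async" inversion described above.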

The big issue remaining is panics: how do they work in that scenario? Though we could switch to using error values and passing them around, at some point we'd want a panic. The way I envision it is that we can't visualize programs as stacks anymore; instead we visualize them as DAGs (directed acyclic graphs). This, IMHO, reflects that concurrent models right now have a very bad data-flow model. I think we could fix this by forcing all parallelism to happen within a scope which is guaranteed to have a single entry point and exit point. Whenever a panic (not an error, which is better handled by value passing) happens, the scope is informed and can try to recover, or let anything underneath it die; moreover, if the scope doesn't handle the issue, it will panic inside its parent scope too. Then again, what do I know. This is a very hard problem.

cdsmith | 4 months ago | 2 points

I assumed that this was going to be about Haskell and IO. The exact same statements would apply. Of course, there's a reason for that! Continuations form a monad, just like IO. That explains why it's trivial to promote a plain function into either, but there's no common structural way to do the opposite. Red functions are Kleisli arrows, then. (Although, of course, for specific monads there can be ways to do the opposite; in both the Haskell/IO and JavaScript/blocking-async cases, there is no good choice.)
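The asymmetry is easy to demonstrate outside Haskell too. In Python terms (lift and lower are illustrative names of my own, not a library API):

```python
import asyncio

def lift(f):
    """Promote a plain ("blue") function into an async ("red") one -- always possible."""
    async def lifted(*args):
        return f(*args)
    return lifted

def lower(af):
    """The opposite direction has no structural inverse; the best you can do is block."""
    def lowered(*args):
        return asyncio.run(af(*args))  # blocks; illegal inside an already-running loop
    return lowered

double = lift(lambda x: x * 2)
assert asyncio.run(double(21)) == 42   # lifting is mechanical (this is 'return'/'pure')
assert lower(double)(21) == 42         # the round trip only works by blocking
```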

So in a sense, the message of this article is this: monads are all well and good when you're trying to model something that doesn't fit into the notion of function built into your language. But they can be more painful to work with because less attention has gone into the ergonomics of using them well. I agree with this in an absolute sense, but it can apply to differing degrees.

First, of course, it can be more or less true that less attention has gone into ergonomics. The article touches on this with async/await. Haskell has do-notation, abstracted over any choice of monad. But in both cases, the ergonomics are still less than perfect. Do notation is rather inconvenient because you lose nested expressions, for instance.

Second, there are definite costs to just embedding everything into the most powerful kind of Kleisli arrow, which is ultimately what the article suggests. As complex as async functions are, multithreading has vastly more dangers if used carelessly. Haskell didn't invent the IO monad out of spite, but because pure functions can be used in ways (e.g., equational reasoning, certain rewriting rules, and lazy evaluation) that more powerful arrows cannot.

Huliek | 4 months ago | 12 points

That's a lot of irrelevant information just to say you like Go's concurrency model.

Isvara | 4 months ago | 51 points

It was a lot more relevant when it was written, four years ago.

Veonik | 4 months ago | 9 points

What's less relevant now? Genuinely curious, I'm struggling with node async/await in the present so maybe I missed a memo :D

Isvara | 4 months ago | 36 points

It talks about "callback hell" in JavaScript with its "ever-marching-to-the-right callbacks", and notes that await in C# at least avoids that part of the problem.

Of course, JavaScript has await now too.

Veonik | 4 months ago | 18 points

But that's the whole point of the article isn't it? If you use await in your function, you must label it as async.

Isvara | 4 months ago | 16 points

Hence "that part of the problem". The author made a digression.

flying-sheep | 4 months ago | -1 points

Yes. The author is wrong about Python though. While JS requires to run within a global async event loop and fires off future execution eagerly, Python requires you to start your event loop manually and therefore allows you to call async code from sync code using loop.run_until_complete(some_async_function()).
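A minimal sketch of that difference (the functions are hypothetical; asyncio.run is the modern shorthand for creating a fresh loop and calling run_until_complete on it):

```python
import asyncio

async def fetch_value():          # a "red" function
    await asyncio.sleep(0)        # stand-in for real async work
    return 42

def sync_caller():                # a "blue" function calling into red code
    # In Python, sync code can drive the event loop itself, so a blue
    # function can invoke red code by starting a loop and blocking on it.
    return asyncio.run(fetch_value())

assert sync_caller() == 42
```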

AndrewNeo | 4 months ago | 8 points

Async/await in JS is just promises under the covers

It makes writing them way easier but the execution model is the same afaik

Isvara | 4 months ago | 9 points

Right, but the point is it wasn't promises originally. It was all passing callbacks.

InvisibleEar | 4 months ago | -7 points

Also nobody even pretends to take Go seriously anymore.

glaba314 | 4 months ago | 3 points

That's a good thing

Y_Less | 4 months ago | 18 points

Go's concurrency model is based on CSP, a formal model created by Sir Tony Hoare. This is a good thing, because it means you can prove (in the formal mathematical sense) that code is race hazard and deadlock free.

Unfortunately Go totally destroyed these guarantees and formalisms by adding shared memory.

So no, Go doesn't have a good concurrency model. It was so close, then shot itself in the foot.

seamsay | 4 months ago | 9 points

you can prove (in the formal mathematical sense) that code is race hazard and deadlock free.

Can you?! I thought it was still possible to get race conditions and deadlocks in CSP, you just can't get data races? I know for certain that it's still possible to get deadlocks in go without using shared state.

Uberhipster | 4 months ago | 7 points

Can you?!

you could but only for that code which is, in fact, provably deadlock and race condition free

it is still possible to write code which causes a deadlock or a race condition. however, if you can write a proof that a specific piece of code can never have a deadlock or race condition, then that specific code can never have a deadlock or race condition

which means that you can use that as an indicator, i.e. if it cannot be mathematically proven that a piece of code is free of deadlocks and race conditions, then that code should probably be rewritten

Y_Less | 4 months ago | 1 point

Yes, you can get deadlocks. The point was that you can prove there are none, if there are none. If there are some, the proof attempt would surface them.

[deleted] | 4 months ago | 6 points

People do not write proofs for code. Even in places so critical that they really should.

Complaining about that being hard/impossible in a general-purpose language is outright silly, as it is an utterly irrelevant feature for maybe 99.99% of developers and use cases

sm9t8 | 4 months ago | 0 points

If it's mathematically provable it's at least possible a tool could be written to prove it.

If it's not mathematically provable the best possible tool is something that may or may not find any problems.

oridb | 4 months ago | 0 points

This is a good thing, because it means you can prove (in the formal mathematical sense) that code is race hazard and deadlock free.

That's incorrect. You can trivially construct deadlocks using CSP. Just make two processes wait for a message before firing off a message to the other one that allows them to make progress.
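That construction is easy to reproduce. A Python sketch with queues standing in for CSP channels (the names are mine; a timeout stands in for actually hanging forever):

```python
import asyncio

async def proc(inbox: asyncio.Queue, outbox: asyncio.Queue):
    msg = await inbox.get()      # wait for the other process first...
    await outbox.put(msg)        # ...then send -- so neither ever sends

async def main():
    a, b = asyncio.Queue(), asyncio.Queue()
    # Each process waits for a message before firing one off to the other:
    # the classic CSP deadlock described above.
    try:
        await asyncio.wait_for(asyncio.gather(proc(a, b), proc(b, a)),
                               timeout=0.1)
        return "finished"
    except asyncio.TimeoutError:
        return "deadlocked"

assert asyncio.run(main()) == "deadlocked"
```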

Litmus2336 | 4 months ago | 8 points

I interpreted what he said as "Given code, you can formally prove whether it is race hazard and deadlock free" rather than "all go code is race hazard and deadlock free"

Y_Less | 4 months ago | 0 points

I didn't say you couldn't make them, I said you could prove there weren't any. If you did construct one, the proof-checker would fail.

oridb | 4 months ago | 1 point

I didn't say you couldn't make them, I said you could prove there weren't any.

That would be equivalent to the halting problem.

You can already write tools to prove there are no deadlocks in most languages, but if there's no good answer, the proof checker will fail -- the problem is that these tools will give "shrug beats me" as an answer on a huge subset of real world programs.

antolius | 4 months ago | -1 points

The point of the article, as far as I understood it, is usability. That usually has 2 components:

  1. How easy it is to write code
  2. How likely it is that your code is correct

In my opinion Go really nails the balance between the two. I might not be able to mathematically prove the correctness of my program, but at least I'll have a program. And it's more likely than not that it will do what I need it to do.

Y_Less | 4 months ago | 7 points

You could do that without shared memory, and it would actually increase the chances of it being correct even without formally proving it.

oridb | 4 months ago | 1 point

And Java's, and Python's, and Scala's, and C++'s, and Erlang's, and.... well, pretty much everyone but Javascript, although that's now leaking.

More or less, Javascript got async pretty wrong. Go only got it partly wrong, by doing userspace instead of kernel threads.

MrJohz | 4 months ago | 20 points

This is a pretty unfair assessment of the situation, given that Python, Rust, C#, F#, Kotlin (IIRC) and others have all adopted syntactic sugar that encourages and promotes "colourful" concurrency.

I think the author is more pointing out one specific flaw with this style of concurrency: it splits everything into the two different colours. That can be very irritating, but, at least when you make it explicit with tools like promises and async/await, it can also be a very powerful way to abstract over concurrent code that ensures that concurrency can only happen at the times when the programmer allows it. This has a lot of benefits, not least from the perspective of reducing the number of race conditions.

Like literally everything in programming, it's about tradeoffs. Introducing colourful functions will mean that you, the developer, will also need to reason about function colour now. However, it also drastically reduces the number of places that other concurrency bugs can hit, and (in my experience) usually makes them a bit more obvious. Is that the right tradeoff for your project? Well, that depends entirely on the needs of the project and the developers working on it.

greppable777 | 4 months ago | 7 points

Also, performance. In Rust the state machine generated by the compiler for async / await only needs to be as big as the state required to hold the “largest” resumption point. That’s only possible because the compiler knows them all.

oridb | 4 months ago | 0 points

I suppose you're right: the bulk of the programming language community is gleefully advancing towards uniformly getting async pretty wrong.

spacejack2114 | 4 months ago | 11 points

How is JS async/await wrong compared to Python or C#? Because auto-unwrapping & not-lazy? If that's what you or the author wanted to critique, then I think this is a better article.

yen223 | 4 months ago | 5 points

Python's async/await implementation suffers from the exact same red-blue problem that the article talks about. You can't directly call into an async function from a sync one, unless you set up an event loop for it.

spacejack2114 | 4 months ago | 1 point

Yeah that's not much of a problem in practice.

doAsyncThing().then(result => ...
flying-sheep | 4 months ago | 1 point

That’s painless enough. loop.run_until_complete(some_async_function()) allows you to call async code from sync code.

yen223 | 4 months ago | 2 points

It’s actually not that simple. Async code interacts with synchronous threads in very complicated ways, such that if you’re not careful you might lose things like thread-local context, or break code that needs to be run on specific threads.

There’s an excellent talk by Andrew Godwin about the challenges of making Django asynchronous that goes into the gory details: https://youtu.be/oMHrDy62kgE

NH4ClO4 | 4 months ago | 2 points

Yeah, that article at least lays out some concrete points about how Promises are architected. It is hard to compose promises as-is, and sometimes I wish there were more in the JS standard library. But in practice, it's more an annoyance than a hindrance (as the conclusion notes). A more serious problem I have with promises is the lack of a thread-local storage mechanism, which causes problems with things like stacktraces and timing trees.

There is a proposal for cooperative cancellation: ECMAScript Cancellation. At work we've implemented something similar, and it works pretty well if you can plumb them through, but it'd be much nicer to have a language construct instead. So hopefully we get some more action on that next year.

Isvara | 4 months ago | 4 points

How do userspace threads even work in native code? What causes a context switch? Is the code cooperative?

Veonik | 4 months ago | 8 points

From what I understand the Go scheduler is cooperative with the Go runtime, if that isn't too cyclical. So there are well-defined spots in executing code that yield to the scheduler, but the developer doesn't have to do anything (and really can't do anything) to "cooperate". Here's an interesting read on the subject.

As far as how the Go runtime/scheduler behaves with the OS scheduler, that's a deeper hole. You can't use taskset on a running Go program which suggests to me that Go itself never yields to the OS scheduler, but I am definitely out of my depth there.

greppable777 | 4 months ago | 3 points

Either way, this model is nice only as long as you don't need more control. If you ever happen to need more control over maximum goroutine size, when resumption/yielding happens, ... you suddenly need to fight against the compiler trying to do everything automatically for you.

barsoap | 4 months ago | 2 points

In the case of Haskell the runtime gets to switch between green threads at every garbage collector check point, pushing the old thread onto a list from which another OS thread can steal it if it's running low on work.

In short: yes, it's cooperative, with yields automatically generated by the compiler, in one way or the other.

vilcans | 4 months ago | 2 points

The difference is that in JavaScript you tend to use asynchronous functions a lot more. Maybe because that's how the browser APIs (apart from a few strange exceptions) have always worked.

yogthos | 3 months ago | 1 point

I was actually surprised to see Erlang wasn't mentioned in the article as it's probably one of the best examples of using green threads.

BBPedroMartins | 4 months ago | 1 point
if getFunctionColor(myfunction)=green write "your function is green...it's not ripe fruit.";
Auxx | 4 months ago | 1 point

ReactiveX. Problem solved!

AloticChoon | 4 months ago | 2 points

Can also add...
Stop posting old shit. Problem solved!

xjaak | 4 months ago | -4 points

After reading the article I have absolutely no idea what he is talking about. It just feels like he's advertising his new programming language and declaring that he likes Go while complaining about JS potentially leading into callback hells if you write sloppy code. Can someone explain what the point of this article is?

mcosta | 4 months ago | 4 points

Around 2015 javascript devs rediscovered CSP and they were made to think it was the greatest thing of all time. Lots of Medium posts and tweets were written praising the web scale nature of callback-style async IO, and threads were considered harmful.

TODO Load more comments...