To start, a number of people took issue with my working definition of actors, claiming that in implementation X, actors are composable. We can quibble about my definition, but I hope the deeper point was clear - side effects and state hurt composability, and usage of actors as I defined them is stateful. James Iry has also made some of the same points about the statefulness of Erlang-style actors.
But, if you don't like my definition, if you'd like to claim that an "actor" is actually just a function A => Future[B] and that therefore "actors" are composable, then I don't really have a problem with that (although, why call this model 'actors'? Why not call it 'functions from A => Future[B]' or 'the Kleisli arrow for futures'?). But if the actor model also includes the ability to asynchronously "send a message" to another arbitrary actor, and if the expression representing this message send does not evaluate to a future containing the result sent back by the receiving actor, then my subsequent arguments about the lack of composability still apply. The takeaway from my post, even if you think my definitions are bogus, shouldn't be "Aha!! Actors are okay! There is no issue with using them, even in stateful, non-composable ways".
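To make the contrast concrete, here's a rough Scala sketch (the names and bodies are made up, purely to show the shape): functions A => Future[B] compose via flatMap, which is just Kleisli composition for Future, whereas a fire-and-forget send evaluates to Unit and gives you nothing to build on.

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object ComposableFutures {
      // Two "actors" viewed as plain functions A => Future[B] (hypothetical examples)
      val parse: String => Future[Int] = s => Future(s.trim.toInt)
      val square: Int => Future[Int]   = n => Future(n * n)

      // Kleisli composition for Future: each step's result feeds the next,
      // and the pipeline is itself just another String => Future[Int]
      val parseThenSquare: String => Future[Int] =
        s => parse(s).flatMap(square)

      // A fire-and-forget send, by contrast, evaluates to Unit: there is no
      // value representing the receiver's reply, so there is nothing to compose
      def tell(msg: String): Unit = () // imagine this enqueues msg in some mailbox
    }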
This brings up another point - I don't think everyone commenting (both on my post and James' above) is working with the same ideas about what it means for a function or expression to be pure. The "standard" definition is that an expression (such as sending a message to an actor) is side-effect free if it is referentially transparent. And while there are some nuances to the definition of referential transparency, I think everyone familiar with the concept would agree that functions from A => Unit can't possibly be RT unless they are literally the constant unit function.
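Here's a tiny sketch of why, with a made-up send standing in for sending a message to an actor: the call and its result are the same value, (), yet they are not interchangeable.

    import scala.collection.mutable.ListBuffer

    object NotReferentiallyTransparent {
      val mailbox = ListBuffer.empty[String]

      // A side-effecting String => Unit, standing in for a message send
      def send(msg: String): Unit = mailbox += msg

      // Not RT: substituting the expression send("hi") with its value () changes the program
      val x: Unit = send("hi") // mailbox now contains "hi"
      val y: Unit = ()         // the same value, but nothing was enqueued

      // The one String => Unit that is RT: the constant unit function
      val constUnit: String => Unit = _ => ()
    }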
Looking through the various responses, I notice that no one really argued with my claim that side-effects hurt composability. I'd be interested if anyone can poke holes in my argument here (and by that I mean finding some problem with my logic, not disputing my definitions).
A number of people responded to my negative offhand remarks on OOP. Pointing out problems with OOP was not really the point of my post. Obviously I'm no fan of OOP and maybe someday I'll write more about that. The general point I was making is that we should be wary of adopting a "better" technology without understanding the underlying problem that technology purports to solve. Doing so inevitably leads to solutions with a lot of incidental complexity that fail to fully solve the underlying problems. For solutions like this, you'll often see advocates unable to really give a formal argument for why that solution is superior, instead falling back on pointing to particular examples, harping on how convenient certain things are, and gushing about various intangibles like how "beautiful" it is. There's nothing wrong with such advocacy, but if you cannot formalize your argument that one solution is better than another, chances are you do not fully understand the underlying problem and are therefore ignorant of whether some more direct, simpler solution exists.
Moving on, Chris Quenelle had this interesting comment, claiming that any form of explicit parallelism is unnecessary:
If your program is purely functional, the compiler can assign threads to whichever chunks of calculation it wants to. Hence you don't need actors. Or any other form of explicit parallelism. The purpose of explicit synchronization is to manage the timing of side-effects in the presence of parallelism.
I'm sympathetic to this point of view but I do think there needs to be something more. I've tinkered with the Future monad, which is explicit in the sense that you have to decide when you are writing code in that monad, but implicit in that it only requires that you indicate dependencies between computations, not specify how those computations are scheduled out to threads, how many threads are used, etc. But I believe this breaks down for distributed computation, where the topology of the concurrency is a static or semi-static structure that must be under the control of the programmer. I've also found the monadic style doesn't work so well for "pipeline" parallelism. But more on that in a later post.
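To illustrate what I mean by indicating dependencies rather than scheduling, here's a rough sketch (made-up functions with trivial bodies): the code states which results feed into which computations, and everything about thread assignment is left to the ExecutionContext.

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object DependenciesOnly {
      // Placeholder computations: hypothetical names, trivial bodies
      def fetchUser(id: Long): Future[String]    = Future(s"user-$id")
      def fetchOrders(user: String): Future[Int] = Future(user.length)
      def fetchPrefs(user: String): Future[Int]  = Future(user.hashCode)

      // We state only the dependencies: orders and prefs both need user, but not
      // each other, so the runtime is free to run them concurrently. How they get
      // scheduled onto threads is decided entirely by the ExecutionContext.
      def report(id: Long): Future[(Int, Int)] =
        fetchUser(id).flatMap { user =>
          val orders = fetchOrders(user) // started eagerly, independent of prefs
          val prefs  = fetchPrefs(user)  // started eagerly, independent of orders
          orders.zip(prefs)              // the final result just needs both
        }
    }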
Ulf Wiger had a pretty interesting comment in which he argued (I think, Ulf, correct me if I'm wrong) that the compositional style often leads to excessive dependencies between components. This is bad given that "the thing that often kills large projects is dependency management". I would buy the claim that large projects can be killed by poor dependency management, although of course projects can fail for lots of other reasons, too. But I view this as completely separate from my general arguments about how side-effects kill composability. Even when programming purely functionally, nothing stops you from duplicating programming work to avoid relying on some shared (not yet completed) dependency. For instance, even if two modules could in principle both be implemented using some shared generic code, it might be worth not building this out if the communication overhead and additional dependency would create a bottleneck for the overall team. This is analogous to the situation that often arises in parallel algorithms, where one can improve runtimes by duplicating some work but running things in parallel. In any case, nothing about purely functional programming precludes you from duplicating work like this if it makes sense. But if you do decide you'd like to reuse and compose code, you have the means to do it.
Lastly, there was one commenter who disagreed with my claim that "intuitiveness" does not justify use of actors as a programming model: "The link to our intuition and hence ability to leverage prior experience is paramount in being able to comprehend complex systems."
This is sort of a loaded statement. Let me unpack it a bit. First, what is really meant by intuition here? When you learn something, anything, you develop an intuition for it, which of course is helpful. But if this is the case, what does it really mean to say that one technology is more intuitive than another? Assuming you understand both technologies and have an intuition for both, is one more "intuitive" than the other? With respect to what? Perhaps that simply means you are more familiar with one than the other?
No, I suspect what is actually meant by 'intuitive' here is "analogous to the physical world". But is there really anything particularly magical about the intuition each of us has from our understanding of the physical world? The more you try to pin down the supposed benefits of this intuition, the more they seem to vanish. In programming, the inferences one can make by analogy to the physical world are either too vague and too unreliable to be useful, or so simple that nothing is gained by tying them back to a physical system. Intuitions can inspire you as you explore a design space, but actual programming requires much more precise thinking than the vague intuitions we all have; fuzzy models here are for the most part "worse than useless". We're better off building, understanding, and developing an intuition for some new, simpler, more precise abstraction.