That said, I think some of the explanations for FRP indicate what is, in my opinion, a conceptual error - the reification of time. It may be that I am misunderstanding what is meant by FRP researchers, and the following rant will seem wholly uncontroversial. But if not, or if what I say seems crazy, please leave comments! Just to clarify, this is not a post trashing FRP - I am simply arguing against a particular way of viewing FRP that I feel is unhelpful.
Reactive behaviors in FRP are often described in terms of the following "purely conceptual" model:
(Time -> a) -> (Time -> b). Time? Why on earth does time appear explicitly in the model? Values do not depend on time; they depend on other values. When we say that a value is time-dependent, we are just being sloppy, implicitly assuming the presence of some process that will provide our function (a function is just a dependent value) with a series of possibly distinct inputs.
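To make this model concrete, here is a minimal sketch of what it looks like as actual Haskell (the names Behavior and delay are mine, chosen for illustration, not from any particular FRP library):

```haskell
-- The "purely conceptual" model: a behavior is a function of time,
-- and a reactive transformation maps one behavior to another.
type Time = Double
type Behavior a = Time -> a

-- A transformer that delays its input behavior by one time unit. Note that
-- it samples the input in the past - time appears explicitly everywhere.
delay :: Behavior a -> Behavior a
delay b = \t -> b (t - 1)

main :: IO ()
main = print (delay (\t -> t * 2) 5)  -- samples the input at t = 4, prints 8.0
```

Even this tiny example has to mention Time explicitly, which is exactly the reification being objected to here.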
Thus, for example, if we are really trying to be precise, it is a conceptual error to say that "the stock price of Microsoft is time-dependent" (of course, I don't object to this as an informal statement). The stock price of Microsoft is dependent on various things - current interest rates, say, the set of competing companies' products, and who knows what else... the price of oil, whether a butterfly in Idaho is flapping its wings, etc. Time is just a proxy for some process that will provide our Microsoft price computing function with a series of
(interest rate, competing products, oil price, ...) tuples. In principle, if we could clone our universe and "at a later time" supply this function with the same values, it would produce the same output! So why talk about time at all?
Reifying time, even in informal language about FRP or "loose conceptual models", is therefore confusing and misleading because it obscures the real story (of processes and their dependencies) behind an opaque and ill-defined notion. And if we attempt to reify time in the programming model, we run into problems as soon as we attempt to make it less opaque - endowing time with any sort of operations yields unimplementable gobbledygook, in which functions can reach arbitrarily far back into the past (which suggests an omniscient runtime remembering every value that ever was) or look into the future, violating causality! (These problems are also discussed here by Conal.) For these reasons I think it makes more sense to talk directly about processes and their dependencies, and it seems this is exactly what is done in modern FRP formulations! For instance, in this paper on causal commutative arrows, time is not even explicitly mentioned! One possible instantiation of their model is:
data SF a b = SF (a -> (b, SF a b)). Time is not mentioned at all; the paper deals with processes directly, which are the real story. Continuity is handled without any special consideration, as we can push updates through the dependency graph as often as we like, and "step size" is just like any other parameter propagating through the graph!
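Here is a minimal sketch of running such a process (the helpers sumSF and runSF are mine, for illustration; only the SF type itself comes from the paper):

```haskell
-- The CCA-style process type from the text: consume one input, produce one
-- output and the continuation of the process.
newtype SF a b = SF (a -> (b, SF a b))

-- A stateful running-sum process: each step adds the input to the total so
-- far and emits it. No notion of time appears anywhere.
sumSF :: Int -> SF Int Int
sumSF total = SF $ \a ->
  let total' = total + a
  in (total', sumSF total')

-- Push a list of inputs through a process, collecting the outputs.
runSF :: SF a b -> [a] -> [b]
runSF _ [] = []
runSF (SF f) (a:as) = let (b, sf') = f a in b : runSF sf' as

main :: IO ()
main = print (runSF (sumSF 0) [1, 2, 3, 4])  -- prints [1,3,6,10]
```

To model continuous quantities, a step size could simply be included in the input type (say, SF (Double, a) b), making it just another value propagating through the graph, as described above.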
As an aside, you can tell there are problems when it is not possible to expose a model in its full power (and not just for efficiency - most
(Time -> a) -> (Time -> b) functions are actually ill-defined) and we must instead resort to a rather ad hoc set of combinators that expose only pieces. Another way I like to think of this situation is that the combinators chosen actually induce another model that may be quite different from the intended "conceptual model", and the conceptual model becomes a rather bad way of discussing what is actually happening. I distinguish this from omitting possible operations in the conceptual model purely for efficiency reasons. (I wonder if there is a precise way of clarifying this distinction.)
Lastly, when I was initially thinking about FRP, it was in the context of distributed, reactive programs that could replace actors, which are stateful, and I came up with this type:
data Pipe a b = Pipe (a -> Future (b, Pipe a b)), where
Future is the monad for concurrent evaluation (I couldn't believe it when I read the CCA paper and encountered the almost identical type,
SF!). But this could be generalized to
data Pipe m a b = Pipe (a -> m (b, Pipe m a b)). With different choices of
m, we obtain different sorts of systems. When I showed this to my coworker Rúnar, he immediately suggested the list monad, implying nondeterminism. Along these lines, we can also imagine a probability monad, which also implies nondeterminism in the system's evolution, but of a sort more akin to quantum physics. This would admit the possibility of a more interesting evaluator, one that culls branches whose probability dips below some epsilon (or even lazily evaluates all possible branches, even low-probability ones, but only when they are demanded). Interestingly, with
m as an arbitrary Applicative (but not a monad), I believe we obtain the "commutativity" law suggested in the CCA paper, which basically enforces that processes, once split, cannot interfere with one another.
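As a sketch of the list-monad instantiation Rúnar suggested, here is a nondeterministic process that branches at every step (the helpers choosy and runPipe are my own illustrative names, under the assumption that stepping a Pipe just means binding through m):

```haskell
-- The generalized process type from the text, parameterized on a monad m.
newtype Pipe m a b = Pipe (a -> m (b, Pipe m a b))

-- A nondeterministic process using the list monad: at each step it either
-- adds the input to its running total or ignores it, branching the system.
choosy :: Int -> Pipe [] Int Int
choosy total = Pipe $ \a ->
  [ (total + a, choosy (total + a))  -- branch 1: accept the input
  , (total,     choosy total)        -- branch 2: ignore the input
  ]

-- Feed a list of inputs through a Pipe, collecting the output trace in m.
-- With m = [], this enumerates every possible evolution of the system.
runPipe :: Monad m => Pipe m a b -> [a] -> m [b]
runPipe _ [] = return []
runPipe (Pipe f) (a:as) = do
  (b, p') <- f a
  bs <- runPipe p' as
  return (b : bs)

main :: IO ()
main = mapM_ print (runPipe (choosy 0) [1, 2])
-- prints all four possible traces:
-- [1,3]
-- [1,1]
-- [0,2]
-- [0,0]
```

With m = Identity this recovers the SF type from the CCA paper (up to newtype wrapping), and swapping in a probability monad would attach a weight to each branch, enabling the epsilon-culling evaluator mentioned above.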