Friday, July 30, 2010

Reification of time in FRP is a conceptual error

I'm finally getting around to learning more about functional reactive programming and it is an absolutely fantastic field. FRP is a huge, important idea in programming... the missing link connecting FP to all the "inherently stateful" tasks that supposedly necessitate imperative programming (things like simulations, GUIs, etc). I'm quite excited to see how this technology evolves.

That said, I think some of the explanations for FRP indicate what is, in my opinion, a conceptual error - the reification of time. It may be that I am misunderstanding what FRP researchers mean, in which case the following rant will seem wholly uncontroversial. But if not, or if what I say seems crazy, please leave comments! Just to clarify, this is not a post trashing FRP - I am simply arguing against a particular way of viewing FRP that I feel is unhelpful.

Reactive behaviors in FRP are often described in terms of the following "purely conceptual" model: (Time -> a) -> (Time -> b). Time? Why on earth does time appear explicitly in the model? Values do not depend on time, they depend on other values. When we say that a value is time-dependent, we are just being sloppy, implicitly assuming the presence of some process that will provide our function (a function is a dependent value) with a series of possibly distinct inputs.
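Written out as Haskell types, that conceptual model is something like this (using Double for Time is just one convenient stand-in; the model itself leaves Time abstract):

type Time = Double

-- A behavior is a time-indexed value; a reactive "program"
-- transforms one behavior into another.
type Behavior a = Time -> a
type BehaviorFn a b = Behavior a -> Behavior b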

Thus, for example, if we are really trying to be precise, it is a conceptual error to say that "the stock price of Microsoft is time-dependent" (of course, I don't object to this as an informal statement). The stock price of Microsoft is dependent on various things - current interest rates, say, the set of competing companies' products, and who knows what else... the price of oil, whether a butterfly in Idaho is flapping its wings, etc. Time is just a proxy for some process that will provide our Microsoft price computing function with a series of (interest rate, competing products, oil price, ...) tuples. In principle, if we could clone our universe and "at a later time" supply this function with the same values, it would produce the same output! So why talk about time at all?

Reifying time, even in informal language about FRP or "loose conceptual models", is therefore confusing and misleading because it obscures the real story (of processes and their dependencies) behind an opaque and ill-defined notion. And if we attempt to reify time in the programming model, we run into problems as soon as we attempt to make it less opaque: endowing time with any sort of operations yields unimplementable gobbledygook, where functions can reach back into the past arbitrarily far (which suggests an omniscient runtime remembering every value that ever was), or look into the future, violating causality! (These problems are also discussed here by Conal.) For these reasons I think it makes more sense to talk directly about processes and their dependencies, and it seems this is exactly what is done in modern FRP formulations! For instance, in this paper on causal commutative arrows (CCA), time is not explicitly mentioned at all! One possible instantiation of their model is: data SF a b = SF (a -> (b, SF a b)). The paper deals with processes directly, which are the real story. Continuity is handled without any special consideration, since we can push updates through the dependency graph as often as we like, and "step size" is just like any other parameter propagating through the graph!
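To make the process view concrete, here is a small executable sketch of that SF type with a couple of combinators (the combinator names and the integral example are my own illustration, not the paper's API):

data SF a b = SF (a -> (b, SF a b))

-- Lift a pure function into a stateless process.
arrSF :: (a -> b) -> SF a b
arrSF f = SF (\a -> (f a, arrSF f))

-- Wire one process into another; the dependency is explicit.
composeSF :: SF a b -> SF b c -> SF a c
composeSF (SF f) (SF g) = SF $ \a ->
  let (b, sf') = f a
      (c, sg') = g b
  in (c, composeSF sf' sg')

-- A stateful process: a running Euler integral. Note that the step
-- size dt is just another input flowing through the graph.
integralSF :: Double -> SF (Double, Double) Double
integralSF acc = SF $ \(x, dt) ->
  let acc' = acc + x * dt
  in (acc', integralSF acc')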

As an aside, you can tell there are problems when it is not possible to expose a model in its full power (and not just for efficiency reasons - most (Time -> a) -> (Time -> b) functions are actually ill-defined) and we must instead resort to a rather ad hoc set of combinators that expose only pieces of it. Another way I like to think of this situation is that the combinators chosen actually induce another model that may be quite different from the intended "conceptual model", and the conceptual model becomes a rather bad way of discussing what is actually happening. I distinguish between this and omitting possible operations in the conceptual model purely for efficiency reasons. (I wonder if there is a precise way of clarifying this distinction.)

Lastly, when I was initially thinking about FRP, it was in the context of distributed, reactive programs that could replace actors, which are stateful, and I came up with this type: data Pipe a b = Pipe (a -> Future (b, Pipe a b)), where Future is the monad for concurrent evaluation (I couldn't believe it when I read the CCA paper and encountered the almost identical type, SF!). But this can be generalized to data Pipe m a b = Pipe (a -> m (b, Pipe m a b)). With different choices of m, we obtain different sorts of systems. When I showed this to my coworker Rúnar, he immediately suggested the list monad, implying nondeterminism. Along these lines, we can also imagine a probability monad, again implying nondeterminism in the system's evolution, but something more akin to quantum physics. This would admit the possibility of a more interesting evaluator, one that culls branches whose probability dips below some epsilon (or even lazily evaluates all possible branches, including low-probability ones, but only when they are demanded). Interestingly, with m as an arbitrary Applicative (but not a monad), I believe we obtain the "commutativity" law suggested in the CCA paper, which basically enforces that processes, once split, cannot interfere with one another.
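Here is a small sketch of the generalized type, along with the list-monad instantiation Rúnar suggested (keepOrDrop and runAll are hypothetical names, purely for illustration):

import Data.Functor.Identity (Identity (..))

data Pipe m a b = Pipe (a -> m (b, Pipe m a b))

-- m = Identity recovers the deterministic SF type.
type Deterministic a b = Pipe Identity a b

-- m = [] gives a nondeterministic process: each step may branch.
-- This one either keeps or drops each input, tracking the running
-- sum of what it kept.
keepOrDrop :: Int -> Pipe [] Int Int
keepOrDrop acc = Pipe $ \x ->
  [ (acc + x, keepOrDrop (acc + x))  -- keep x
  , (acc,     keepOrDrop acc)        -- drop x
  ]

-- Feed a list of inputs through a nondeterministic pipe,
-- collecting the output trace along every branch.
runAll :: Pipe [] a b -> [a] -> [[b]]
runAll _ [] = [[]]
runAll (Pipe f) (a:as) =
  [ b : bs | (b, p') <- f a, bs <- runAll p' as ]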

33 comments:

Joe said...

Question (coming from a position of near-complete ignorance about FRP): Is the reification of time in FRP really meant to say that time is literally an input to the function, or is it meant as an abstraction to save programmers from having to specify all of the "real" inputs to the function?

I can understand how making things functions of time could lead to sloppy and/or unnecessarily specific code. However, I can also see where specifying all of the real inputs to a function places an unreasonable burden on the programmer. Certainly, to use your stock price example, the set of inputs to a function that produces Microsoft's stock price is unknowable and possibly infinite. Even in simpler cases where it's possible to enumerate all of the inputs to a function, I don't think that it would always be worthwhile to require programmers to enumerate all of the inputs to something that can be adequately modeled as a function of time, would it?

Saizan said...

It seems Time is mostly used to abstract away the actions of other systems on some external resource.
If you want some value whose meaning is "the current content of the file 'foo'", you'd have to make it a function of every process that can write to "foo", which is not feasible. So we just say that "foo" changes over time, and it's actually quite simple to imagine a polling process that fetches the current value from the environment.

Another way would be to not care about "foo" in particular, but just take the content of some file as a parameter; but then there'd have to be some imperative shell that opens the file and pushes it down into the process.

I'm not sure if exploiting linux's inotify would change anything for this example, semantically speaking.
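Concretely, the polling-process reading might look like this (the loop shape and the push callback are hypothetical, just for illustration):

import Control.Concurrent (threadDelay)
import Control.Monad (forever)

-- The "time-varying content of foo" is just a process that
-- repeatedly fetches the current value and pushes it downstream.
pollFoo :: (String -> IO ()) -> IO ()
pollFoo push = forever $ do
  contents <- readFile "foo"
  push contents
  threadDelay 100000  -- poll every 0.1 s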

wren said...

I can't speak for FRPers, but I think you may be misinterpreting the model. Certainly the stock price of Microsoft *is* time-dependent. But "dependent" should be read in the same manner as in dependent probabilities; that is, "given the time, we will know what the value is". The model doesn't mean that time is the *causal* agent which defines the value. The model only means that, provided we know the time, then we will also know the value (e.g., by looking at historical records, or by waiting until that time arrives). Dependence just means that the values vary as we vary over the values of some other type. Perhaps a better way of phrasing the model would be,

(a | Time) -> (b | Time)

Where we use the "|" to mean "given", just like in probability theory. This notation is telling, because what we really mean is that the types, a, b, etc have a *distribution* over Time. In other words, the (x|y) notation could be implemented as [(x,y)] with the constraint that all the values of y are distinct. But then, if we wanted to make that constraint explicit, the type is (InfiniteMap y x). So long as we never desire to do reverse lookup and figure out the times when a particular value shows up, then this is identical to (y->x).

The conceptual model of viewing mutability as a function from time to values is exactly what is so ingenious about FRP. The reason simulators, games, and the like have been held up as "proof" that we need side effects (and global state) is because the conceptual model of imperative languages is exactly what you say: functions from a tuple of all sorts of crap. By abstracting over those variables and calling them "time" we do many great things: We simplify the model because we don't need to track dependencies explicitly. We improve the model, because we remove the possibility of passing in stale versions of values which leads to an incoherent tuple of state. And finally we remove the idea of side-effects from the model, thereby making it amenable to pure functional implementation.

Pure functional versions of dependency tracking, causality tracking, and declarative truth maintenance are also possible, and beautiful, but they are backed by a different conceptual model. In particular, they remove time completely which means they have a model where we're constantly racing to the present (i.e., the time is always present, so we're always pushing changes until things settle down). In a sense, this is the dual perspective, similar to how monads capture structured values but comonads capture values in context. Perhaps it's better for writing games, perhaps not; either way, the original FRP model is interesting to study in its own right. The inaccuracy of the FRP model could be solved with dependent types by using something like this,

(t : Time) -> (d : Delta Time) -> ((t' : t-d..t) -> (a | Time=t')) -> (b | Time=t)

This way we are ensured that the result cannot look into the future, and also that it only holds onto a finite amount of the past. And with dependent types, we could choose to interpret the "|" as a weak sum, so all the values are pairs of a value and some kind of proof that the signal has that value at the given time. This is a much better version of the time-based FRP model, but dependent types are tricksy.
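Haskell cannot express those dependent types directly, but a rough non-dependent approximation (the name Causal and the caller-side convention are mine) conveys the shape:

type Time  = Double
type Delta = Double

-- Given the current time t, a window size d, and the input signal
-- restricted (by convention here, rather than by the type system)
-- to the interval [t - d, t], produce the output at t. Causality
-- and finite memory are the intended invariants; in this sketch
-- they live in the comment, not the type.
newtype Causal a b = Causal (Time -> Delta -> (Time -> a) -> b)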

huntse said...

Microsoft's share price is pretty much as poor an example as is possible to choose in that it is very directly a function of time in several respects. Firstly, all stocks are subject to exchange trading phases (open, close, auctions etc) which are directly related to the time of day. If you subscribe to market data during the close period, you will see the price is really doing very different things at that time.

Secondly, there are numerous well-documented seasonality effects which affect stock prices, with the price process acting differently at different times of day or week, or on specific days of the year (Fed or earnings days or NFP day, for instance).

Patai Gergely said...

Reifying time, even in informal language about FRP or "loose conceptual models" is therefore confusing and misleading because it obscures the real story (of processes and their dependencies)

Using the same argument, you could say that declaring some result to be of type Integer is confusing, since it obscures the real story about arriving at that integer. But that’s exactly what we love about functional programming: referential transparency everywhere.

As wren said, you can think of abstractions of reactive values as possibly infinite sets of time-value pairs. The desired outcome is such a set, and we are given a combinator kit to define it, all the while preserving compositionality.

Heinrich Apfelmus said...

What wren said. "Time-dependence" is not intended to be read in the sense of causal dependence. Rather, it simply means that the value in question is modeled as a function Time -> a ("the result of a function depends on its argument").

This approach is common and successful in physics. Clearly, the position of a ball depends on how I threw it; but nevertheless it is modeled as a function Time -> R.

Alessandro said...

Nice post, it's good to see another point of view about "Reification of time".

I agree with the comments; time is an abstraction in any model you see.

Without time it's very hard to argue about complex series of events.

Sometimes we need time, especially in measurements:
"How many milliseconds will it take to run that series of functions?"

But I agree that in FRP it may be better to be careful when using time abstractions.

Paul Chiusano said...

Wow, some great comments. :) What I am stuck on is that ultimately, for programs to be executable, the dependencies between processes need to be made explicit - there is no way around this (excepting cases where we happen to have a closed-form solution for the system's evolution). So it then comes down to whether we have the programmer construct this graph explicitly, or whether the framework or runtime somehow infers it. Arrowized FRP makes construction of this graph explicit. I don't really understand how other approaches could work - it seems like you still end up with essentially the same thing, since to make inference of the graph tractable you end up having to restrict exposure to the model to a small set of ad hoc combinators which are, essentially, dataflow-graph-constructing combinators. But perhaps it's just my lack of imagination and knowledge of FRP and nothing fundamental... :)

Paul Chiusano said...

@wren - I like your explanation of the model in terms of conditional probabilities. I'm intrigued by your comparison about the "always chasing the present" view and traditional FRP as being analogous to monad vs comonad... can you explain a little more what you mean?

@huntse - I still don't think Microsoft's price depends on the time. The 'time' is just a proxy for a set of other values - the current position of the earth relative to the sun, its rotation, etc. Of course, it's probably not worth modeling all that - instead it is probably better to cut the dataflow graph off at that point and have something like what @Saizan mentioned - an imperative shell that pushes (Microsoft price, date) pairs into the root of the dataflow graph (this is a bit like what is done with Iteratees - an imperative Enumerator pushes values into the Iteratee). But the key thing is that there isn't anything special about these values; they are just like any other values. Even "time" is not special.

@Patai - hmm, I think you are twisting my point. Of course I don't object to giving names to things like Integer. But Integer forms a new semantic level - it comes with an algebra that I can reason about - I don't need to be aware of what's happening inside the black boxes. Time does not have this property, IMO.

@Heinrich - in physics, if you actually had to simulate the ball and didn't have a closed form solution wrt time, you would have to explicitly model the REAL dependencies. :) I don't think there is any way around that.

Anonymous said...

Isn't time important for animations?

wren said...

As I mentioned before, I can't speak for the FRPers since I've only followed them from a distance. However, my master's thesis involved designing and working on a logic programming language (Dyna2) that uses the notion of "declarative truth maintenance" in order to retain purity despite an external process that can change the facts dynamically.

The traditional FRP perspective where we take time-dependent values as our primitive type and provide combinators for manipulating them is extremely reminiscent of monads. Monads encapsulate a design pattern of structured values. For example, lists contain not just one value, but a collection of values structured in a particular way, and we preserve that structure through the use of the monadic primitives. But fundamentally, monads take a view from outside of the structure. We don't know the structure we're preserving, we just know a few primitive ways of manipulating this contraption we're given.

The alternative is to adopt a perspective from within the structure, looking out. That is, programs and values are embedded within some larger context. Programs in context can manipulate vanilla values just like normal, but they also have a set of primitives for being able to move around their context. These operations for adjusting the context are just as primitive and inscrutable as the monadic ones for adjusting structure. This alternative perspective is essentially the one provided by comonads.

In Dyna2, we are clearly running a program in context, namely the context of the external program that can alter our facts. We don't happen to have any primitives for altering the context within Dyna, but we are able to notice when the context changes. However, from the perspective of Dyna it is not so much that we say "hey, my environment changed!" and then make changes. Instead, we are informed that certain things are out of date and we add it to our work queue, just like we add things when we've derived a new fact and need to determine whether inference rules would allow us to derive something new from it. That is, from the perspective of Dyna itself, "the truth" is something that is considered stable and unchanging; it's just that we're forever running trying to reach it and maintain it.

The dichotomy between traditional FRP and the atemporal version seems very similar to this. That is, in FRP we have structured objects that we pass around and manipulate from the outside. And because we are, in a sense, "outside of time" we need for the structures themselves to embody the notion of time. Whereas, from the atemporal perspective our programs are existing within those structures and looking out; because we are deeply embedded within time, we do not notice that we ourselves are changing in time, instead we only see the world changing around us. We are always chasing the present just like Dyna is always chasing the truth.

Just like monads have a notion of being "in the monad" and then "running the monad", comonads have the same. Inside of Dyna we have a certain perspective, but Dyna would be uninteresting were it not for that external program that runs it. In a similar way, CCA is uninteresting without some sort of outside causative agent to drive things; nothing is its own cause. Ultimately, the 'atemporal' perspective must still have some notion of time; it's just that time gets moved from the "inside" half of things to the "outside" half. Thus, from the inside we take a relativistic rather than absolute view of time. In the general field of modeling and simulation, the choice between objectivity and subjectivity isn't one that has an absolute answer. They each have different strengths and weaknesses, so the choice depends on what it is you're modeling.

wren said...

Also, your blogger seems broken. It doesn't accept OpenID (error code bX-ywtyjz), and when using Google accounts it gives an error page, but then posts the comment anyways.

Yitz said...

It has been suggested that the type should really be something like:

frp :: Ord t => (t -> a) -> (t -> b)

Then we can still state requirements like "the value of frp f x must depend only on f y for y < x" without reifying time.

Does that help?

sclv said...

Here's another take: a decent FRP should be able to implement simple physics easily. Implementing a simple model of physics necessarily means introducing the relationships between acceleration, velocity, position, and so on. This means integrals and derivatives with respect to, at least, time.

So if you think that some form of integration and differentiation should be, if not primitive, then at least trivially constructed from primitive operations, then time becomes something special.
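For example, in the time-based model a naive numerical integral is directly expressible (integrate and the fixed-step approximation below are illustrative, not any particular FRP library's API):

type Time = Double
type Behavior a = Time -> a

-- Crude left-endpoint Riemann sum of f over [0, t].
integrate :: Double -> Behavior Double -> Behavior Double
integrate dt f t = sum [ f s * dt | s <- [0, dt .. t - dt] ]

-- Velocity from constant acceleration, position from velocity
-- (inefficient, but it shows the idea).
velocity, position :: Behavior Double
velocity = integrate 0.001 (const 9.81)
position = integrate 0.001 velocity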

Conal said...

Hi Paul,

Based on your post and your replies, I do think you're reading something into "depends on time" and "function of time", adding some causal & operational associations that I never intended. Additional phrasings are "time-varying" or simply "varying". With these phrases, I'm trying to convey exactly the same meaning, which is exactly the simple notion of "function" in math. No notion of "because" (causal) or "how" (operational). Nothing about algorithms, code, formulas (closed-form or otherwise).

Does this clarification help at all?

Over the years I've noticed some people have the sort of discomfort you've expressed here. I think mainly they've overlaid operational thinking onto understanding what FRP is, i.e., mixing together how FRP programs might execute on a sequential computer with what FRP programs mean, independent of execution.

In your Pipe types and the CCA paper's SF type, I see operational models. Nice tools for thinking about how, but not at all appealing to me for defining what. I always want simple, precise & compelling denotation first. And then consider many possible operational approaches for correctly implementing the denotation.

Heinrich Apfelmus said...

@Paul

Nope, you don't have to model the "actual dependencies" (it's not even clear what that is).

For instance, take a ray of light. The principle of least action is an equivalent formulation of its laws of motion and states that between two points, the path taken by the light ray is the one that has minimal length. Fixing the start and endpoint first and then looking at the movement in between is wholly counterintuitive in terms of "dependencies", but it's an equally valid description.

In short, "time dependence", which means "time varying", is just a model, and it neatly separates out the business of describing what motion is and calculating it (knowledge of the dependencies can help with the latter, but does not need to).

Paul Chiusano said...

Once again I am blown away by the responses! :)

@wren, thanks for your long, thoughtful replies. Your last post really clarified the distinction between the two points of view... one is not more valid than the other. I think of them as the "push" and "pull" views. The "pull" view is more analogous to the conceptual model I complained about; the "push" view is akin to arrowized FRP. I can see how a "pull" view could work, but it seems to require more of a runtime to avoid space leaks, etc. But I would definitely not rule it out.

@conal - Yes, I follow. I guess what I am saying is, time is rather opaque... so to have something just be a function of time might not be the most meaningful perspective... it says nothing about what is causing the evolution wrt time. It's fine if you just don't want to model this - in many cases you don't want to model things explicitly, they are just black boxes... but in a programming system, one that can actually be executed, these dependencies will need to be discovered somehow. Is this purely an implementation concern? I am curious what you think about CCAs... my interpretation is that it does not really make sense with CCAs to even talk about what they mean. CCAs satisfy the laws given; beyond that, no meaning is assigned. The SF type just happens to be one type (but not the only one) that satisfies the laws.

Sorry about the problems posting comments - it looks like google has been having some issues lately:
http://www.google.com/support/forum/p/blogger/thread?tid=3b68bd1995999497&hl=en

Conal said...

> to have something just be a function of time might not be the most meaningful perspective... it says nothing about what is causing the evolution wrt time.

Right, and wonderfully so! The semantic model for behavior says nothing about what causes a particular behavior to be what it is, nor what computerish mechanisms get involved in implementing behavior (e.g., "processes and their dependencies", to borrow from your post). Similarly, the semantic model we call "numbers" says nothing about how particular numbers come to be. Consider the numeric program "seven = 3 + 4". You might object that the model of numbers fails to explain the "causal" relationship between 3, 4, and 7, and that "it obscures the real story (...) behind an opaque and ill-defined notion", omitting operational concerns like order of evaluation, binary representations, little- vs big-endianness, etc. And that a mathematical (rather than computer-oriented) notion of numbers involves "unimplementable gobbledygook".

> in many cases you don't want to model things explicitly, they are just black boxes... but in a programming system, one that can actually be executed, these dependencies will need to be discovered somehow. Is this purely an implementation concern?

Yes. Purely an implementation concern.

I like to define a simple & precise denotational model for each abstraction, completely separate from implementation concerns. This model gives a compelling & tractable basis for (a) clear & correct reasoning about uses of the abstraction, and (b) correct and efficient implementations of the abstraction.

> I am curious what you think about CCAs... my interpretation is that it does not really make sense with CCAs to even talk about what they mean. CCAs satisfy the laws given, beyond that no meaning is assigned.

Agreed. CCA is like Monoid, Group, and Monad: a signature and set of laws, and nothing else. And CCA is not FRP, just as Applicative is not FRP. (And Group is not number.) Rather, CCA and Applicative are two type classes among many that can help to organize FRP. Which leaves the question: what is FRP? For me, FRP is exactly the denotational model. More accurately, there are a few denotational models and hence a few FRPs. Type classes are too little to capture FRP (as you & I have just pointed out with CCA and Applicative). Implementation notions like "dependency" and "process" are too much to capture FRP, since they impose non-essential constraints and complications.

So CCA is not an alternative to denotational thinking & precision. It's a delightful & useful algebraic packaging that fits with the denotational model(s) I think of as being FRP.

Putting on my implementor hat, I next get interested in how I might get a FRP (i.e., a simple & precise denotational model) to execute on a machine.

Paul Chiusano said...

@conal - There is something I am uncomfortable with here.. but I can't quite put my finger on it... and perhaps I am misunderstanding what you are saying when you talk about finding a good denotational model first. :) But there seems to be some arbitrariness in what you assign to be "the" meaning of a functional reactive program (or any program). We can place the expressions of our program in correspondence with many different mathematical objects... why choose one object over another? Why functions (T -> a) -> (T -> b) instead of dataflow graphs, say? It seems the only answer is that some objects aid in thought more than others... but why is this the case?

And why is this exercise so important in the first place? Perhaps this is more of a general difference in how one looks at these things... isn't it the algebra - the operations and the laws that govern them - that aids in reasoning? The algebra is what lets you prove things about your programs... the denotation itself doesn't seem to help here... or, rather, it helps only insofar as it maps the expression to some object with a meaningful/useful algebra (but if that is the case, why deal with the objects at all? Why not just deal directly with the algebra?). I think of this as a category-theoryish perspective... we shouldn't care what the expressions ARE internally or what they mean, we care how they relate to other expressions - i.e., the algebra. You think of the CCA algebra as being secondary... something you can use to implement a denotational model. But another way of looking at it is that the algebra is primary, and our denotational model is secondary, just one of many models satisfying the CCA laws...

Okay, I am done rambling. :)

Ben Hutchison said...

I agree with the main thrust of your argument. I have been troubled by the "reification of time" concept in FRP myself, and feel that this wrong-turn may have limited the growth of FRP approaches.

I think FRP should seek to declaratively relate the change in state to the current state, along the lines of autonomous dynamical systems:

d( x(t) )/dt = f( x(t) )
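A minimal Euler-step sketch of such an autonomous system (euler and decay are my names): successor states come only from current states, with no absolute time anywhere:

-- x' = f(x), advanced by a fixed step dt.
euler :: Double -> (Double -> Double) -> Double -> [Double]
euler dt f = iterate (\x -> x + dt * f x)

-- e.g. exponential decay x' = -x, starting from 1.0:
decay :: [Double]
decay = euler 0.01 negate 1.0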

Conal said...

> We can place the expressions of our program in correspondence with many different mathematical objects... why choose one object over another?

I have a few criteria I use to answer this question:

* Mathematical precision. Base requirement.

* Adequacy. Must cover the whole programming interface and explain properties, such as nontrivial equivalences.

* Simplicity. Relative measure, i.e., simpler is better.

See also Luke Palmer's post Semantic Design.

Of critical importance is to measure simplicity after the first two criteria are satisfied. Don't measure simplicity of a hand-wavy notion, and do not assume that your hand-waving-based guess about simplicity is at all close to the measure after you've replaced fuzzy thinking with precise thinking.

"Everything is vague to a degree you do not realize till you have tried to make it precise." - Bertrand Russell


> Why functions (T -> a) -> (T -> b) instead of dataflow graphs, say?

Apply my three criteria above. First we'd have to make "dataflow graphs" precise. If we give a "graph" answer (set of nodes & edges), then we won't get nontrivial equivalences. Once you accomplish precision & adequacy, then we can examine simplicity.

What I like about (T -> a) -> (T -> b) is that it simply says: time-varying output that depends on time-varying input.

> isn't it the algebra - the operations and the laws that govern them that aid in reasoning? [...]

Does the algebra completely specify the semantics? If so, it's not a terribly useful algebra (describing only one thing). If not, how do you know what to implement?

On the other hand, a denotational model tells you exactly what to implement, without constraining how you implement it. And it gives you a basis for proving algebraic properties. Fitting into patterns defined by algebraic concepts (Monoid, Applicative, Arrow, etc) gives helpful design feedback. It guides the design toward universals and away from the arbitrary. See Denotational design with type class morphisms, especially footnote 1 and the John Reynolds quote in the conclusion section.

You can also read a related discussion on an LTU thread.

> You think of the CCA algebra as being secondary... something you can use to implement a denotational model.

Not to implement a denotational model, but rather to structure its interface and to give feedback on its elegance. See comments above.

wren said...

@paul re @conal: Part of the reason it's an interesting question is that the question is not "how do I write games in a pure language?" but rather "what does it mean to write games purely?". Essential to the latter question is that funny word, "mean".

You're positing that it is unnecessary to find a model, so long as we can find an algebra. Whether this is indeed adequate is related to a long argument between semanticists and proof theoreticians. That is, the question of whether a proof theoretic system should count as a "semantics" or whether only model theoretic systems can. This in turn is related to the question of what it means for an algebraic system to be "sound" or "complete". Generally, these are phrased as an algebraic system being sound and complete with respect to some semantic model. I won't raise my opinions on this debate, since they're not relevant here, but you should be aware that your question is perhaps deeper than you intended.

In line with a comment dons made over on reddit, the interesting thing about FRP is not that it offers a solution to how to write games etc in Haskell. Certainly it's interesting if FRP does enable us to do that, and certainly we would like for it to, but it's not (so far as I understand) the true goal. The true goal is to figure out an answer to the question of meaning: what is it that games etc mean, why have their semantics eluded us for so long, and is it possible to capture those semantics in a pure system. All of these are very interesting questions, which is why FRP is an interesting area for research.

dlandauer said...

Typo: causal is written as "casual" in the anchor text of your link to the CCA paper.

Paul Chiusano said...

@dlandauer - fixed, thanks :)

Paul Chiusano said...

@conal and @wren - Yes, I think this discussion is deeper than I'd realized. :) I think of there as being two perspectives: On the one hand, we start with the algebra, and figure out what operations and primitives we want to be able to express and what relationships we expect to hold among the expressions of this algebra. In the other approach, we start with what we want our expressions to mean, then work from there to determine an algebra.

Are these two really that different though? One way to view the meaning-first-approach is that you are mapping your expressions to some other mathematical object with a well-known algebra, so it's still "algebra first". On the other hand, with the algebra-first-approach, you might argue that you are at least vaguely thinking of some possible denotation for your expressions, and this guides the choice of operators and such, so it's still "meaning-first"!

I think I'd be more comfortable with (T -> a) -> (T -> b) as a model if T were actually something well-defined. As it stands, it has no algebra, and so this model tells me nothing about the sorts of things that are or should be expressible in it. That question will be answered by the set of combinators we end up exposing (i.e., the algebra), and for me this is where the real work is. For instance, if T is opaque, with no defined operations - as in forall t. (t -> a) -> (t -> b) - then it is impossible to construct time-varying functions at all! If t is a ring, it permits functions that can reach into the past or look into the future (by that I mean, there are conceivable functions of that type).
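A small illustration of that point (the function names are mine): with t fully opaque, parametricity forces pointwise behavior, while giving t any arithmetic immediately admits non-causal functions:

{-# LANGUAGE RankNTypes #-}

-- With t opaque, all we can do with the input signal is sample it
-- at the one time we are handed; every inhabitant is pointwise.
pointwise :: (a -> b) -> (forall t. (t -> a) -> (t -> b))
pointwise g f t = g (f t)

-- Give t subtraction and we can reach arbitrarily far into the past...
delay :: Num t => t -> (t -> a) -> (t -> a)
delay d f t = f (t - d)

-- ...or peek into the future, violating causality.
peek :: Num t => t -> (t -> a) -> (t -> a)
peek d f t = f (t + d)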

Conal, I suspect if the semantic domain were something with an obvious algebra, we would not be disagreeing much since we'd have worked from different directions but met in the middle. :)

By the way, great discussion guys. :)

Conal said...

Hi Paul:

From your latest remarks I can't tell whether you understood or even read my previous response. I'd like to hear your responses to the answers I offered there: denotational design criteria, dataflow graphs, (in)completeness (non-specificity) of algebras, references to my TCM paper and an LTU thread, etc. And if you don't understand those answers, please ask for clarification.

Also, I'm puzzled about how you could not see T as well-defined. I expect that every single paper that gives a denotational semantics for FRP (whether classic or arrow) answers that question, and there are many such papers. A hand-waving explanation could fail to define T, and an algebraic approach (as you advocate) could also fail to define T. A denotational semantics will answer your questions. Which illustrates one of the points I made in my previous response about why denotational.

Paul Chiusano said...

@conal - I did check out those references. I had seen your TCM paper before and also Luke Palmer's semantic design post, but I went back and took a closer look in light of our current discussion. The discussion of your TCM paper on LtU was actually quite helpful - it seems like I am saying something similar to Matt M: http://lambda-the-ultimate.org/node/3215#comment-47263 but it is hard to tell for sure.

Your criteria for why you'd choose one mathematical object over another for your denotation seem reasonable, though quite subjective (not a bad thing necessarily).

As for why I see T as being ill-defined... I find this hard to explain without just repeating myself... ill-defined is probably not the best way to say it - it is more that T by itself in the (T -> a) -> (T -> b) model does not really nail down what sorts of programs should be expressible without making further assumptions. At this point, I would like to read up more about the distinction between the different types of semantics (denotational, algebraic, ...) since that seems to be the heart of where we might differ and I feel like I am handwaving - if you can recommend any good resources let me know (Derek Elkins on LtU pointed to this book: http://www.cs.uiowa.edu/~slonnegr/plf/Book/). And I'd like to just generally collect my thoughts and perhaps write them up, using a less contentious example than FRP. Then maybe we can continue the discussion. :)

Thanks for the comments.

flippac said...

Part of the point of FRP is to express things that vary continuously over time - not just a series of sampled frames showing, every 1/60th of a second, where a thrown ball is, but the entire curve. How can you do this without the notion of time being a significant part of the system? Many values can and will depend directly on time or an integral with respect to it.

If all you need is a discrete event system then yes, FRP is more than you need.

Ben Hutchison said...

flippac,
in most simulations:

S' = f(S, dt)

where S' is a state subsequent to state S, and dt is the time step between them.

While dt appears here, t (i.e., time) does not. Ultimately, /subsequent states/ can be computed from /previous states/. The difference between a continuous and a discrete simulation is whether dt is fixed or variable.
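As a sketch (simulate and fall are my names for the driver and an example system):

-- S' = f(S, dt), threaded through a list of (possibly variable)
-- time steps.
simulate :: (s -> Double -> s) -> s -> [Double] -> [s]
simulate = scanl

-- e.g. a falling body; state is (position, velocity).
fall :: (Double, Double) -> Double -> (Double, Double)
fall (x, v) dt = (x + v * dt, v - 9.81 * dt)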

Conal said...

Ben: I agree with what you say about the way things are implemented "in most simulations". And I see that standard way as an unfortunate low-levelness of those implementations. That is, most simulations are non-modular in this way: the specification of the continuous behavior is not separated out from the implementation of numerical solution/approximation methods for those specifications. Rather, these two conceptually separable aspects are enmeshed.

In a high-level, modular simulation program, I expect we'd see a lot more of continuous time, and I don't think we would ever see "dt" or "previous state" in the specifications of behavior.

Anonymous said...

FRP time modeling is not right or wrong; it is the way it is. And for each way you choose to model something, there will be pros and cons. Time is continuous and monotonically increasing, which would seem to be the reason for the FRP modeling. Yet if you view the world in some "wavelet"-like decomposition where the world is a structured combination of time and space characteristics, then the FRP time model probably meets its limits. Moving to a quantum world is worse.

Paul Chiusano said...

@Anon - Can you elaborate on "Yet if you view the world in some "wavelet"-like decomposition where the world is a structured combination of time and space characteristics, then the FRP time model probably meets its limits. Moving to a quantum world is worse." It sounds very interesting but I'm not really sure what you're talking about. :)

equational said...

(I am the previous Anon, great blogs by the way)

One important thing to realize is that a model exists only within its interpretation. The "interpretation" of FRP models assumes that some variables can be piecewise continuous, changing with time. It also assumes that discrete events can happen and that these events possibly cause discontinuities in those variables (that is why they are piecewise continuous).
When you consider the model as code, you can bring in type theory and do nice things; FRP demonstrates that well.
When you consider the model as mathematical properties, you can bring in analysis and also do nice things. For example, it might be easier to model a system in the frequency domain. In the frequency domain, time disappears, and the reification of time is less obvious.
I mentioned wavelets because time "partially disappears": the wavelet decomposition constructs a hierarchy of time/frequency Cartesian products. In such a context, time is no longer continuous, but discretized in a hierarchy. Your model could express wavelet properties and deal with events (which would need to be aligned with the time discretization/frequency domain). But my feeling is that the reification of time would meet its limits in such a case.
I mentioned wavelets, because time “partially disappears”: the wavelet constructs hierarchy of time/frequency Cartesian products. In such a context, time is no longer continuous, but discretized in a hierarchy. Your model could express wavelet properties, and deal with events (which would need to be aligned with the time discretization/frequency domain. But my feeling is that the time reification would meet its limits in such a case.