## Flow

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

ciferkey
Posts: 9
Joined: Sun Dec 20, 2009 4:53 am UTC

### Flow

So I've been a reader here for a while, but I've never really discussed much of anything; I mostly just appreciate all of the thought-provoking discussion. Anyway, I've been wondering whether anybody has looked into
Luke Hutchison's work on developing a new programming language (http://www.flowlang.net/p/flow-manifesto.html) and what they thought about it.
Hydrogen is a light, odorless gas, which, given enough time, turns into people

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Flow

I've read a few dozen lines. It seems interesting, but I'm curious about Turing completeness.

And, more generally, I'm curious how ... strong ... a program can be written without a near-infinite compile time.

Aha: Making Flow Turing-complete.
Condition 3 is the only condition that is not supported by the description of Flow so far as a fixed-topology lattice. The way Flow implements the third condition is to allow fixed-topology lattices to be applied to each element of a (possibly infinite) list, and to allow the value of the (n+1)th element to be set as the output of the lattice that depends on the (n)th element. Computation on the (n+1)th element blocks until the result has been computed from the (n)th element, so this enforces a strict ordering between of the two lattices. The program terminates automatically when there are no more elements in the list, i.e. when the last iteration does not write an (n+1)th element to the end of the list. This may sound a little arcane, but all sorts of more common control flow constructs can be mapped to this model.
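A rough sketch of that mechanism in Python (Flow has no published syntax, so the names and structure here are entirely my own):

```python
# Hypothetical sketch of Flow's list-iteration construct (not real Flow
# syntax): a fixed-topology lattice `step` is applied to each element;
# if it produces a value, that value becomes element n+1, otherwise no
# (n+1)th element is written and the iteration terminates.
def iterate_lattice(step, first):
    """Yield elements 0, 1, 2, ... where element n+1 = step(element n)."""
    current = first
    while current is not None:
        yield current
        # Element n+1 blocks on element n: it cannot be produced until
        # `current` exists, which is the strict ordering the quote describes.
        current = step(current)

# Example: count down from 5; step writes no (n+1)th element after 0.
def countdown(n):
    return n - 1 if n > 0 else None

print(list(iterate_lattice(countdown, 5)))  # [5, 4, 3, 2, 1, 0]
```

The generator makes the data dependency explicit: element n+1 literally cannot exist before element n has been computed.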

Interesting.

That still leaves the problem of building up "complex, large" programs easily. The size of programs written in good old imperative languages is ridiculous...
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

### Re: Flow

It also sounds like it would be very tempting for people to just program imperatively using a single potentially infinite list, defeating the whole point.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Flow

I like the idea of timestamps, but I'd rather think of it as a "context" or "thread" or "string" or "sequence". "Sequence" is possibly the least overloaded.

Variables are immutable.
Identifiers + Sequences are mutable.

When you write to an Identifier with a Sequence, you get a timestamped variable.

Sequences are not timestamps, because they are only partially ordered.

If there is a write to id@a, no write to id@b for any b with a < b < c, and a read from id@c, then id@c depends on id@a.
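A toy model of that read rule in Python, treating sequence points as plain integers for simplicity (the previous point notes sequences are only partially ordered, so a real implementation would need more than `max`; all names here are hypothetical):

```python
# Sketch of the rule: a read of id@c depends on the latest write id@a
# with a < c. `writes` maps (identifier, sequence_point) -> value.
def read(writes, ident, c):
    """Return the value the read id@c depends on."""
    candidates = [a for (i, a) in writes if i == ident and a < c]
    if not candidates:
        raise KeyError(f"no write to {ident} before {c}")
    # The read depends on the most recent earlier write.
    return writes[(ident, max(candidates))]

writes = {("x", 1): "first", ("x", 4): "second"}
print(read(writes, "x", 3))  # "first": depends on the write at 1
print(read(writes, "x", 7))  # "second": depends on the write at 4
```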

Basically, I could see "carrying around" your sequence, forking it, creating sub-sequences, etc... while you could write this in flow, that isn't where they seem to be going?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


Osha
Posts: 727
Joined: Wed Dec 03, 2008 3:24 am UTC
Location: Boise, Idaho, USA

### Re: Flow

I really don't know enough to properly judge this one way or another, but... that won't stop me ;D
Hopefully I don't say anything toooo silly

It seemed to invoke a lot of buzzwords and allusions to grand concepts, but without much substance to back them up.

In fact the big-Oh complexity of the execution time of a fixed program lattice is quite easy to determine directly given certain input datasizes
This big-Oh complexity can be a function of architectural parameters and datasizes, changing the parallelization strategy dynamically at various data size thresholds so that the most optimal strategy is always chosen regardless of the input data.

I remain very unconvinced on this part and it goes against my intuition. Is it just obvious enough that they didn't bother really explaining it or something?

The Flow programming paradigm requires the data dependency graph of a program to be directly determinable from its source code, which is directly equivalent to eliminating the concept of a variable's "current value". The resulting dependency graph can be viewed as a lattice of morphisms, as defined in category theory.

How is this at all different than pure functional code in an existing language like haskell? I'd really like to see more (in an explicit section) on the similarities and differences between flow based languages and some modern functional languages.

I'm also really perturbed by the lack of pseudocode. Even small things like GCD or a basic prime sieve would be illustrative, and would give it a bit of substance in my eyes.

(Style guideline: include the keyword "flowlang" somewhere on any web pages that talk about Flow, so that search engines can find information specifically about the language.)
flowlang 8D

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

### Re: Flow

Osha wrote:How is this at all different than pure functional code in an existing language like haskell?

I don't even see how it is different from a classical imperative program after it has been converted into SSA form.

I'm not sure whether the author wants to forbid programs from accessing global state. If you want to forbid all access to global state, you could just as well code in haskell. If you don't, you will always run into concurrency-related issues (e.g. determining whether a given piece of memory will be used again in the future).

Also:
The way Flow implements the third condition is to allow fixed-topology lattices to be applied to each element of a (possibly infinite) list, and to allow the value of the (n+1)th element to be set as the output of the lattice that depends on the (n)th element. Computation on the (n+1)th element blocks until the result has been computed from the (n)th element, so this enforces a strict ordering between of the two lattices

This will be a major bottleneck. Why should I want the program to block while iterating over a list? Iterating over an (immutable) list is one of the tasks that can be parallelized very easily. I also don't get how the list we're iterating over can be changed during iteration, since everything is immutable.

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC

### Re: Flow

korona wrote:
The way Flow implements the third condition is to allow fixed-topology lattices to be applied to each element of a (possibly infinite) list, and to allow the value of the (n+1)th element to be set as the output of the lattice that depends on the (n)th element. Computation on the (n+1)th element blocks until the result has been computed from the (n)th element, so this enforces a strict ordering between of the two lattices

This will be a major bottleneck. Why should I want the program to block while iterating over a list? Iterating over an (immutable) list is one of the tasks that can be parallelized very easily. I also don't get how the list we're iterating over can be changed during iteration, since everything is immutable.

I'm basing this only off of what I've read on the site, but I think you've greatly misunderstood what's being talked about here.

Yes, mapping over a list is normally very parallelizable. He's talking about explicitly making a list where each element depends on the previous, though, so you lose that property and have to do it serially (which is precisely what you want here, as it simulates state). In other words, there's a strong distinction between mapping over a list and iterating over a list: the former can be done in parallel, the latter is inherently serial.

Similarly, the list isn't being changed during iteration. The list's value is instead being computed as you go along.
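The map/iterate distinction in Python terms (my own illustration, nothing Flow-specific):

```python
# Mapping applies an independent function to each element
# (parallelizable); iterating builds each element from the previous one
# (inherently serial).
from itertools import accumulate

xs = [1, 2, 3, 4]

# Map: squares[i] depends only on xs[i]; the four computations could
# run in parallel on four cores.
squares = [x * x for x in xs]

# Iterate: sums[i] depends on sums[i-1]; each step must wait for the
# previous result, exactly the blocking behaviour the quote describes.
sums = list(accumulate(xs))

print(squares)  # [1, 4, 9, 16]
print(sums)     # [1, 3, 6, 10]
```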
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

### Re: Flow

I do understand how the operation should be used to simulate state. I understood the operation as:

```
// this is ssa form pseudo code
// let L_0 be the initial list
repeat {
    const List L = phi(L_0, L')
    const int i = phi(0, i')
    if not exists L[i]
        break
    L' = L union some_function(L[i])
    i' = i + 1
}
```

as indicated by the description:
The way Flow implements the third condition is to allow fixed-topology lattices to be applied to each element of a (possibly infinite) list, and to allow the value of the (n+1)th element to be set as the output of the lattice that depends on the (n)th element. Computation on the (n+1)th element blocks until the result has been computed from the (n)th element, so this enforces a strict ordering between of the two lattices. The program terminates automatically when there are no more elements in the list, i.e. when the last iteration does not write an (n+1)th element to the end of the list.

I don't think that this operation is in any way better than a simple "Iteratively execute function until result = 0". It seems to me that it is an artificial abstraction that does not even enhance parallel performance.

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

### Re: Flow

It doesn't. It exists because without it, Flow might not actually be Turing Complete.

All in all, it sounds really cumbersome in order to get assurances that you can obtain just as easily from proper application of 'par' in Haskell.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

undecim
Posts: 289
Joined: Tue Jan 19, 2010 7:09 pm UTC

### Re: Flow

WarDaft wrote:It doesn't. It exists because without it, Flow might not actually be Turing Complete.

not "might not actually be". It isn't, because without it, the halting problem is decidable for Flow. In no Turing complete system is the halting problem decidable.
Blue, blue, blue

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

### Re: Flow

I expected that to be the case, but I don't know anything about the language (and didn't read the whole site) so I didn't want to make that strong a statement. As there's only one person working on it, it's also possible that the language has some subtle quirk(s) in it that allow it to be TC in ways unbeknown to them.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Flow

korona wrote:I don't think that this operation is in any way better than a simple "Iteratively execute function until result = 0". It seems to me that it is an artificial abstraction that does not even enhance parallel performance.

Because variables do not have a "current state" in Flow. Until result=0 -- which result? The final value of result, which you set once and never read before it is set, because code that wants to read it cannot run yet?

It is a particular abstraction of iterative (or recursive) state that is explicit.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


korona
Posts: 495
Joined: Sun Jul 04, 2010 8:40 pm UTC

### Re: Flow

Yakk wrote:Because variables do not have a "current state" in Flow? Until result=0 -- which result?

Until the iterated function returns 0, e.g. f(f(f(x))) = 0. No artificial constructions are required to define that without a notion of global state.
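A minimal sketch of that in Python, with no mutable state at all (`halve` is just an example function of mine):

```python
# "Iteratively execute function until result = 0", written as repeated
# application of f: computes f(f(...f(x)...)) until the result is 0.
def iterate_until_zero(f, x):
    return x if x == 0 else iterate_until_zero(f, f(x))

def halve(n):
    return n // 2

print(iterate_until_zero(halve, 40))  # 0 (via 40, 20, 10, 5, 2, 1)
```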
EDIT: while statements do not violate the no-current-state paradigm either, if you allow phi functions as in SSA (and without them the language won't be Turing complete, i.e. you cannot describe the iterate-list operation in imperative pseudocode with constant values only).
I simply do not see why we have to replace known, working concepts with artificial constructions without gaining any advantage.
Global variables and escaping references to memory ("escape" as in escape analysis) are what inhibit parallelization; while statements and function iteration don't.