Xanthir wrote:
I've spent a lot of time digesting them, and finally came to what I think is a really simple way of understanding them - a functor is a wrapper object around some arbitrary value(s), that has a well-behaved .map() method.
The values need not be there.
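In code, that picture looks something like this (a made-up Box class, not from any particular library), including the zero-values case:

```python
# Hypothetical "Box" functor: a wrapper holding zero or more values,
# with a law-abiding .map() method.
class Box:
    def __init__(self, *values):
        self.values = list(values)  # zero or more: the values need not be there

    def map(self, f):
        # Apply f inside the wrapper; the wrapper shape is preserved.
        return Box(*[f(v) for v in self.values])

    def __eq__(self, other):
        return self.values == other.values

# Identity law: mapping the identity changes nothing.
assert Box(1, 2).map(lambda x: x) == Box(1, 2)

# Composition law: mapping f then g equals mapping their composition.
f, g = (lambda x: x + 1), (lambda x: x * 2)
assert Box(3).map(f).map(g) == Box(3).map(lambda x: g(f(x)))

# An empty Box still maps fine.
assert Box().map(f) == Box()
```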
Hmm, I think I'm wrong on functor: I was reading it at [T]->(T->U)->U, not [T]->(T->U)->[U] (the second of which is map?)
What is [T]->(T->U)->U, just a bound argument without the function?
For some reason, I thought a functor was sort of a dual of the value, such that ft = [t]f = u for f:T->U. Looks like I was wrong.
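For what it's worth, the two signatures are easy to contrast in code (hypothetical Wrapped class; the names map and run are mine). The [T]->(T->U)->U shape eliminates the wrapper, which is what a bound argument or continuation does; map keeps the result wrapped:

```python
# Hypothetical one-value wrapper to contrast the two signatures.
class Wrapped:
    def __init__(self, value):
        self.value = value

    def map(self, f):
        # [T] -> (T -> U) -> [U]: the result stays wrapped.
        return Wrapped(f(self.value))

    def run(self, f):
        # [T] -> (T -> U) -> U: the wrapper is eliminated.
        # This is the bound-argument/continuation shape, not map.
        return f(self.value)

w = Wrapped(10)
assert isinstance(w.map(lambda x: x + 1), Wrapped)
assert w.map(lambda x: x + 1).value == 11
assert w.run(lambda x: x + 1) == 11  # plain value, no wrapper
```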
A monadic functor is just a functor that knows how to smoosh nested instances of itself together, usually denoted through a .join() method.
So it has join which does [[T]]->[T]? (sufficiently nicely)
I've done .chain or .bind a few times in C++, but not by defining .join.
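With plain Python lists as the functor, the relationship is easy to sketch (my own function names): join flattens one level of nesting, and chain (a.k.a. bind) is just "map, then join":

```python
# Sketch: for plain lists, .join is flattening one level of nesting,
# and .chain (a.k.a. bind) is map-then-join.
def join(nested):
    # [[T]] -> [T]: smoosh one level of nesting.
    return [x for inner in nested for x in inner]

def chain(values, f):
    # Map f (which returns a list) over values, then flatten.
    return join([f(v) for v in values])

assert join([[1, 2], [3]]) == [1, 2, 3]
assert chain([1, 2, 3], lambda x: [x, x * 10]) == [1, 10, 2, 20, 3, 30]
```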
An applicative functor is a multi-arg version of a normal functor. The easier way for me to think of it is that it's a functor that knows how to smoosh together two *independent* versions of itself (siblings, not nested wrappers). This is often very similar to how you smoosh nested things, but not always; sometimes there are multiple well-behaved ways to smoosh together siblings and not all of them work for nested stuff. (Example: the most natural way to smoosh nested Lists is to flatten them into a single master List, and the analogous way to smoosh together sibling lists is to cross-product them, creating a list of tuples containing every pair of items from the two siblings. (Showing that this is analogous is interesting, if you don't see it immediately). There's another way to smoosh together sibling Lists, tho - pair up their values index-by-index, so you get a list whose first value is a tuple of the two siblings' first values, second value is a tuple of the two siblings' second values, etc. This is traditionally called a ZipList, and it actually can't be generally extended to nested lists. Again, interesting to show why.) This lets you map variadic functions over a functor - if all the arguments are in the same type of functor, and it's applicative, you can smoosh them together into a single functor holding a tuple of the argument values, and then map the function over that.
The cross product case seems computationally expensive and impractical.
Simple operations that generate exponential work ... are usually best avoided?
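The two sibling-smooshings from the quote, sketched on plain Python lists. The cost point holds: each cross-product smoosh is multiplicative in the sibling lengths, so it compounds fast when you chain several together:

```python
from itertools import product

# Two ways to smoosh sibling lists into one list of pairs:
def cross_smoosh(xs, ys):
    # Every pairing: analogous to flattening nested lists.
    return list(product(xs, ys))

def zip_smoosh(xs, ys):
    # Index-by-index pairing: the "ZipList" applicative.
    return list(zip(xs, ys))

assert cross_smoosh([1, 2], ['a', 'b']) == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
assert zip_smoosh([1, 2], ['a', 'b']) == [(1, 'a'), (2, 'b')]

# The cross product grows multiplicatively: len(xs) * len(ys) pairs.
assert len(cross_smoosh(range(10), range(10))) == 100
```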
However, it's traditionally implemented and understood in a different way, both because this second way is more natural in Haskell and related heavy-functional languages, and because it avoids having to deal with tuples ever. Instead, you put *the function you want to map* in an instance of the functor, and then call another method (traditionally called .ap()), passing it the argument functor. .ap() knows how to reach into each of the functors, pull out the function and value and call them, then put the result in a smooshed-together functor made from the earlier two wrappers (as I described above). If you curry the function, so being called with the first arg just produces another function that expects the second arg, you can repeat this until all the args are filled in and you're left with a single functor containing the result. Multi-arg mapping functions without ever having to actually acknowledge that functions can *have* multiple args!
And, in the above case, it maybe generates exponential amounts of list output? Well, not for non-list functors, naturally.
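A sketch of the .ap() dance with a curried function, using a made-up FList wrapper (cross-product smooshing, as in the quote):

```python
# Sketch of .ap() on a list-like functor, with a curried function.
class FList:
    def __init__(self, items):
        self.items = list(items)

    def map(self, f):
        return FList([f(x) for x in self.items])

    def ap(self, arg):
        # Reach into both functors: pair every function with every value
        # (cross-product smooshing), apply, and rewrap.
        return FList([f(x) for f in self.items for x in arg.items])

# Curried two-argument add: calling with the first arg returns
# another function that expects the second arg.
add = lambda x: lambda y: x + y

# Put the function in the functor via .map, then .ap in the next argument.
result = FList([1, 2]).map(add).ap(FList([10, 20]))
assert result.items == [11, 21, 12, 22]
```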
(As it turns out, "monadic" is a subclass of "applicative" - if you can do nested-smooshing, there's a clever trick you can use to automatically get the analogous sibling-smooshing behavior. However, sibling-smooshing can often be done more efficiently if implemented directly.)
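The trick, sketched for lists (my own function names): derive ap from chain by reaching into the function-wrapper, then the value-wrapper, and rewrapping each application. Note it yields the cross-product sibling-smooshing, not the ZipList one, which is why ZipList can't be recovered this way:

```python
# If a functor has .chain (nested smooshing), sibling smooshing
# (.ap) comes for free.
def chain(xs, f):
    # map f (which returns a list), then flatten.
    return [y for x in xs for y in f(x)]

def ap_from_chain(fs, xs):
    # Pull each function out of fs, then each value out of xs,
    # wrap every application back up: nested smooshing, twice.
    return chain(fs, lambda f: chain(xs, lambda x: [f(x)]))

add = lambda x: lambda y: x + y
assert ap_from_chain([add(1), add(2)], [10, 20]) == [11, 21, 12, 22]
```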
Then there's "traversable" functors, which are another subclass of "applicative" and have a .traverse() method that gives you a powerful way to iterate a mapping function over a "structured" functor (like Lists or Trees) without destroying the structure. Through ~clever tricks~, they let you automatically *swap* nested functors, so if your value is like A<B<value>>, you can automatically swap it to B<A<value>> as long as the A is traversable and the B is at least applicative, without having to know anything about what B is. This lets you get around a major weakness of monadic functors, which is that they're not composable - if you try to compose two monads (say you have a List of Options), then if you call them in the monadic way (the mapped function returns another instance of the composed monad), you'll end up with the value wrapped up like A<B<A<B<value>>>>, and neither A nor B's .join() method can work automatically. If B is traversable, tho, then you can map the swapper into the middle, giving you A<A<B<B<value>>>>, then call A's join once on the outside, getting A<B<B<value>>>, then map B's join in one level to get A<B<value>>, exactly where you wanted to be!
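The List-of-Options case, as a minimal sequence sketch (A = list, B = Maybe, modelled here as value-or-None; my own function name). This is the A<B<x>> to B<A<x>> swap for that one pair of functors:

```python
# Swap a list of "Maybe" values (value-or-None) into a Maybe list:
# list-of-Optional in, Optional-list out.
def sequence(options):
    out = []
    for o in options:
        if o is None:  # one Nothing poisons the whole result
            return None
        out.append(o)
    return out

assert sequence([1, 2, 3]) == [1, 2, 3]   # all present: Some([1, 2, 3])
assert sequence([1, None, 3]) is None     # any absent: Nothing
```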
So the point of these axioms, within Haskell, is to have "kinds" of "type operations" (a list of X is a kind of type operation here?). If the type operation supports the right axioms, certain algorithms work on the type operations without knowing any more about the type operation, and the result of the algorithm can be described and has certain properties (which someone might desire).