Question
In the Haskell Wiki's Recursion in a monad there is an example that is claimed to be tail-recursive:
f 0 acc = return (reverse acc)
f n acc = do
    v <- getLine
    f (n-1) (v : acc)
While the imperative notation leads us to believe that it is tail-recursive, it's not so obvious at all (at least to me). If we de-sugar the do notation, we get
f 0 acc = return (reverse acc)
f n acc = getLine >>= \v -> f (n-1) (v : acc)
and rewriting the second line leads to
f n acc = (>>=) getLine (\v -> f (n-1) (v : acc))
So we see that f occurs inside the second argument of >>=, not in a tail-recursive position. We'd need to examine IO's >>= to get an answer.
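For reference, here is a simplified sketch of how GHC actually defines IO and its bind (condensed from GHC's internals; the real definitions live in GHC.Base and use unboxed tuples like this):

newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

(IO m) >>= k = IO (\s -> case m s of
                           (# s', a #) -> unIO (k a) s')

Once m has produced its result, control transfers into unIO (k a) s' in tail position, so a recursive call hidden inside the continuation does not grow the stack.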
Clearly, having the recursive call as the last line in a do block isn't a sufficient condition for a function to be tail-recursive.
Let's say that a monad is tail-recursive iff every recursive function in this monad defined as
f = do
    ...
    f ...
or equivalently
f ... = (...) >>= \x -> f ...
is tail-recursive. My question is:
- What monads are tail-recursive?
- Is there some general rule that we can use to immediately distinguish tail-recursive monads?
Update: Let me give a specific counter-example: the [] monad is not tail-recursive according to the above definition. If it were, then
f 0 acc = acc
f n acc = do
    r <- acc
    f (n - 1) (map (r +) acc)
would have to be tail-recursive. However, desugaring the second line leads to
f n acc = acc >>= \r -> f (n - 1) (map (r +) acc)
        = (flip concatMap) acc (\r -> f (n - 1) (map (r +) acc))
Clearly, this isn't tail-recursive, and IMHO cannot be made so. The reason is that the recursive call isn't the end of the computation: it is performed several times, and the results are combined to make the final result.
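To see the branching concretely, here is a small call evaluated by hand (worked out from the definition above):

f 1 [1,2] = concatMap (\r -> f 0 (map (r +) [1,2])) [1,2]
          = f 0 [2,3] ++ f 0 [3,4]
          = [2,3] ++ [3,4]
          = [2,3,3,4]

Each level spawns length acc recursive calls, and the ++ that merges their results is work remaining after every call returns, so no call is in tail position.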
Answer 1:
A monadic computation that refers to itself is never tail-recursive. However, in Haskell you have laziness and corecursion, and that is what counts. Let's use this simple example:
forever :: (Monad m) => m a -> m b
forever c' = let c = c' >> c in c
Such a computation runs in constant space if and only if (>>) is nonstrict in its second argument. This is really very similar to lists and repeat:
repeat :: a -> [a]
repeat x = let xs = x : xs in xs
Since the (:) constructor is nonstrict in its second argument, this works and the list can be traversed, because you have a finite weak-head normal form (WHNF). As long as the consumer (for example a list fold) only ever asks for the WHNF, this works and runs in constant space.
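For instance, a right fold with an operator that is non-strict in its second argument demands only the WHNF at each step, so it can consume repeat in constant space (a small sketch; anyTrue is an illustrative name, not a library function):

-- (||) ignores its second argument when the first is True,
-- so the fold never walks past the first True element.
anyTrue :: [Bool] -> Bool
anyTrue = foldr (||) False

-- anyTrue (repeat True)
--   = foldr (||) False (True : <thunk>)
--   = True || foldr (||) False <thunk>
--   = True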
The consumer in the case of forever is whatever interprets the monadic computation. If the monad is [], then (>>) is non-strict in its second argument when its first argument is the empty list. So forever [] will result in [], while forever [1] will diverge. In the case of the IO monad the interpreter is the very run-time system itself, and there you can think of (>>) as being always non-strict in its second argument.
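Both claims are easy to check in GHCi (a sketch of a session; <<loop>> is how GHC's run-time typically reports this kind of divergence):

-- ghci> forever [] :: [Int]
-- []          -- [] >> c returns [] without ever touching c
-- ghci> forever [1] :: [Int]
-- <<loop>>    -- [1] >> c demands c itself, so the computation diverges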
Answer 2:
What really matters is constant stack space. Your first example is tail recursive modulo cons, thanks to the laziness.
The (getLine >>=) will be executed and will evaporate, leaving us again with the call to f. What matters is that this happens in a constant number of steps: there is no thunk build-up.
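For concreteness, here is a minimal self-contained driver for that first example (the main wrapper is an addition for illustration, not part of the original question); it runs in constant stack space regardless of n, because every bind hands control back to the IO run-time before the next recursive call:

f :: Int -> [String] -> IO [String]
f 0 acc = return (reverse acc)
f n acc = do
    v <- getLine
    f (n - 1) (v : acc)

-- Read three lines, then echo them back in input order.
main :: IO ()
main = f 3 [] >>= mapM_ putStrLn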
Your second example,
f 0 acc = acc
f n acc = concat [ f (n - 1) $ map (r +) acc | r <- acc]
will be only linear (in n) in its thunk build-up, as the result list is accessed from the left (again due to the laziness, as concat is non-strict). If it is consumed at the head it can run in O(1) space (not counting the linear-space chain of thunks f(0), f(1), ..., f(n-1) at the left edge).
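As a quick check (values worked out by hand from the definition above), taking only the head forces just the leftmost branch at each of the n levels:

-- head (f 2 [1,2])
--   = head (concat [f 1 [2,3], f 1 [3,4]])    -- f 1 [3,4] is never forced
--   = head (concat [f 0 [4,5], f 0 [5,6]])    -- f 0 [5,6] is never forced
--   = 4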
Much worse would be
f n acc = concat [ f (n-1) $ map (r +) $ f (n-1) acc | r <- acc]
or in do-notation,
f n acc = do
    r <- acc
    f (n-1) $ map (r+) $ f (n-1) acc
because there is extra forcing due to the information dependency. The same problem would appear if the bind for a given monad were a strict operation.
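To illustrate that last point, here is a hedged sketch built around a hypothetical Writer-like monad (Tally, tick and count are illustrative names, not library types): its bind must combine the two counters after the continuation returns, so the recursive call can never be a tail call, however the do block is arranged:

newtype Tally a = Tally { runTally :: (a, Int) }

instance Functor Tally where
    fmap g (Tally (a, m)) = Tally (g a, m)

instance Applicative Tally where
    pure a = Tally (a, 0)
    Tally (g, m) <*> Tally (a, n) = Tally (g a, m + n)

instance Monad Tally where
    Tally (a, m) >>= k =
        let (b, n) = runTally (k a)
        in  Tally (b, m + n)    -- the addition is pending work that survives every "tail" call

tick :: Tally ()
tick = Tally ((), 1)

count :: Int -> Tally ()
count 0 = pure ()
count n = do
    tick
    count (n - 1)    -- last line of the do block, yet each bind leaves an (m + n) thunk behind

Forcing snd (runTally (count n)) then has to evaluate an n-deep chain of additions, exactly the linear build-up described above; a strict bind would merely trade that thunk chain for a call stack of the same depth.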
Source: https://stackoverflow.com/questions/13379060/under-what-circumstances-are-monadic-computations-tail-recursive