lazy-sequences

Type variance error in Scala when doing a foldLeft over Traversable views

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-24 00:39:58
Question: I am trying to concatenate a series of Traversable views in Scala using foldLeft and am hitting type variance errors that I don't understand. I can use reduce to concatenate a list of Traversable views like so:

    val xs = List(1,2,3,4).map(Traversable(_).view)
      .reduce((a: TraversableView[Int, Traversable[_]], b: TraversableView[Int, Traversable[_]]) => a ++ b)
    // TraversableView[Int,Traversable[_]]
    // xs.force returns Traversable[Int] = List(1, 2, 3, 4)

(Note that I have to write the …
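A minimal sketch of the fold the question is aiming at, assuming the Scala 2.13 collections API (where View replaces TraversableView and composes without the variance annotations the excerpt runs into); it illustrates the intended foldLeft shape rather than the poster's exact pre-2.13 setup:

    import scala.collection.View

    val views: List[View[Int]] = List(1, 2, 3, 4).map(x => View(x))
    // Fold with an explicit empty view as the seed; ++ on views stays lazy.
    val concatenated: View[Int] = views.foldLeft(View.empty[Int])(_ ++ _)
    // concatenated.toList == List(1, 2, 3, 4)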

Can I read n files lazily as a single IO operation in Haskell?

Submitted by 99封情书 on 2019-12-23 10:49:18
Question: How can I read multiple files as a single ByteString lazily, in constant memory?

    readFiles :: [FilePath] -> IO ByteString

I currently have the following implementation, but from profiling as well as my own understanding I will end up with n-1 of the files in memory:

    readFiles = foldl1 joinIOStrings . map ByteString.readFile
      where joinIOStrings ml mr = do
              l <- ml
              r <- mr
              return $ l `ByteString.append` r

I understand that the flaw here is that I am applying the IO actions then …
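One common approach (a sketch, not necessarily the answer given on the original thread) is to switch to lazy ByteStrings, where readFile defers the actual reads and concat is itself lazy; note that this still opens every file handle up front:

    import qualified Data.ByteString.Lazy as BL

    -- Each file's contents are read lazily as the result is consumed,
    -- so earlier chunks can be garbage-collected once streamed out.
    readFiles :: [FilePath] -> IO BL.ByteString
    readFiles paths = BL.concat <$> mapM BL.readFile paths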

Clojure: lazy magic

Submitted by 社会主义新天地 on 2019-12-21 07:36:06
Question: Two almost identical programs generate infinite lazy seqs of random numbers. The first doesn't crash; the second crashes with an OutOfMemoryError. Why?

    ;; Return an infinite lazy sequence of random numbers
    (defn inf-rand [] (lazy-seq (cons (rand) (inf-rand))))

    ;; Never returns. Burns the CPU but won't crash and lives forever.
    (last (inf-rand))

But the following crashes pretty quickly:

    ;; Return an infinite lazy sequence of random numbers
    (defn inf-rand [] (lazy-seq (cons (rand) (inf-rand))))

    (def r1 (inf-rand …
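A sketch of the head-retention difference the excerpt is driving at (the second snippet is cut off above; presumably it binds the sequence to a var and then walks it):

    (defn inf-rand []
      (lazy-seq (cons (rand) (inf-rand))))

    ;; No reference to the head survives the call, so realized elements
    ;; can be garbage-collected as `last` walks past them:
    ;; (last (inf-rand))

    ;; A var such as r1 pins the head of the sequence, so the entire
    ;; realized prefix stays in memory while it is walked -> OutOfMemoryError:
    ;; (def r1 (inf-rand))
    ;; (last r1)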

Lazy sequences in R

Submitted by 冷暖自知 on 2019-12-21 03:32:21
Question: In Clojure, it's easy to create infinite sequences using the lazy sequence constructors. For example,

    (def N (iterate inc 0))

returns a data object N that represents the infinite sequence (0 1 2 3 ...). Evaluating N itself results in an infinite loop, but evaluating (take 20 N) returns the first 20 numbers. Since the sequence is lazy, inc is only applied when you ask for more elements. Since Clojure is homoiconic, the lazy sequence is stored recursively. In R, is it possible to do …
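A sketch of one way to approximate this in R (hypothetical helper names, not taken from the original thread): a generator closure that produces the next value only on demand, which gives on-demand evaluation though not a memoized lazy data structure:

    # Build a generator: each call advances the state by f and returns a value.
    make_iterate <- function(f, x) {
      state <- x
      function() {
        value <- state
        state <<- f(state)
        value
      }
    }

    # Pull the first n values out of a generator.
    take <- function(gen, n) vapply(seq_len(n), function(i) gen(), numeric(1))

    N <- make_iterate(function(x) x + 1, 0)
    take(N, 20)   # 0 1 2 ... 19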

Printing a tree lazily in Newick format

Submitted by 北城以北 on 2019-12-19 04:14:32
Question: I wish to print a binary tree in Newick format, showing each node's distance to its parent. So far I haven't had an issue with the following code, which uses regular recursion, but a tree that is too deep may produce a stack overflow.

    (defn tree->newick [tree]
      (let [{:keys [id children to-parent]} tree
            dist (double to-parent)] ; to-parent may be a rational
        (if children
          (str "(" (tree->newick (first children)) "," (tree->newick (second children)) "):" dist)
          (str (name id) ":" dist))))

    (def …
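A sketch of the usual remedy (my own illustration, not the original thread's answer): replace the call stack with an explicit stack of work items, so tree depth no longer limits the output. This version is not lazy, but the same explicit-stack idea can be wrapped in lazy-seq to emit string fragments on demand:

    (defn tree->newick [tree]
      (loop [stack [tree], sb (StringBuilder.)]
        (if-let [x (peek stack)]
          (let [stack (pop stack)]
            (if (string? x)
              ;; literal fragment: just emit it
              (recur stack (.append sb x))
              (let [{:keys [id children to-parent]} x
                    dist (double to-parent)]
                (if children
                  ;; conj in reverse so "(" pops first, then the children in order
                  (recur (conj stack (str "):" dist) (second children) ","
                                     (first children) "(")
                         sb)
                  (recur stack (.append sb (str (name id) ":" dist)))))))
          (str sb))))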

mapcat breaking the lazyness

Submitted by ☆樱花仙子☆ on 2019-12-18 20:04:23
Question: I have a function called a-function that produces lazy sequences. If I run

    (map a-function a-sequence-of-values)

it returns a lazy sequence, as expected. But when I run

    (mapcat a-function a-sequence-of-values)

it breaks the laziness of my function. In fact it turns that code into

    (apply concat (map a-function a-sequence-of-values))

so it needs to realize all the values from the map before concatenating those values. What I need is a function that concatenates the result of …
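A sketch of a widely circulated workaround (not necessarily the accepted answer on this thread): rebuild mapcat on top of lazy-seq so that concat is only applied as the output is actually consumed:

    (defn lazy-mapcat
      "Like mapcat, but fully lazy: f is applied to each item only as the
       concatenated output is consumed."
      [f coll]
      (lazy-seq
        (when-let [s (seq coll)]
          (concat (f (first s)) (lazy-mapcat f (rest s))))))

    ;; (lazy-mapcat a-function a-sequence-of-values)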

double stream feed to prevent unneeded memoization?

Submitted by 假如想象 on 2019-12-17 05:14:14
Question: I'm new to Haskell and I'm trying to implement Euler's sieve in a stream-processing style. When I checked the Haskell wiki page about prime numbers, I found a mysterious optimization technique for streams. In section 3.8, "Linear merging", of that wiki:

    primesLME = 2 : ([3,5..] `minus` joinL [[p*p, p*p+2*p..] | p <- primes'])
      where
        primes' = 3 : ([5,7..] `minus` joinL [[p*p, p*p+2*p..] | p <- primes'])
        joinL ((x:xs):t) = x : union xs (joinL t)

And it says: "The double primes feed is introduced here to …
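For reference, the snippet assumes ordered-list helpers that the wiki defines (and that also exist in Data.List.Ordered); minimal sketches of the two it uses:

    -- Ordered-list difference: elements of the first list not in the second.
    minus :: Ord a => [a] -> [a] -> [a]
    minus (x:xs) (y:ys) = case compare x y of
      LT -> x : minus xs (y:ys)
      EQ ->     minus xs ys
      GT ->     minus (x:xs) ys
    minus xs _ = xs

    -- Ordered-list union without duplicates.
    union :: Ord a => [a] -> [a] -> [a]
    union (x:xs) (y:ys) = case compare x y of
      LT -> x : union xs (y:ys)
      EQ -> x : union xs ys
      GT -> y : union (x:xs) ys
    union xs ys = xs ++ ys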

lazy-seq and stack overflow for infinite sequences

Submitted by 断了今生、忘了曾经 on 2019-12-13 18:24:25
Question: I am trying to show the importance of lazy sequences and lazy evaluation to non-FP programmers. I have written this prime-generation implementation to illustrate the concept:

    (defn primes-gen [sieve]
      (if-not (empty? sieve)
        (let [prime (first sieve)]
          (cons prime
                (lazy-seq (primes-gen (filter (fn [x] (not= 0 (mod x prime)))
                                              (rest sieve))))))))

    ;;;;; --------- TO SHOW ABOUT THE LAZY-THINGS
    ;; (take 400 (primes-gen (iterate inc 2)))
    ;; (take 400 (primes-gen (range 2 1000000000000N)))

However, I …
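A sketch of an alternative formulation (my own illustration, not taken from the original thread) that avoids stacking one new lazy filter per prime found, by testing each candidate against the primes already produced:

    (def primes
      (lazy-seq
        (cons 2
              (filter (fn [n]
                        ;; n is prime if no already-found prime <= sqrt(n) divides it
                        (not-any? #(zero? (mod n %))
                                  (take-while #(<= (* % %) n) primes)))
                      (iterate inc 3)))))

    ;; (take 400 primes) realizes the first 400 primes without nesting
    ;; a fresh filter around the sieve for every prime discovered.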