ghc

Bit Size of GHC's Int Type

Submitted by 送分小仙女 on 2019-11-30 17:08:46
Question: Why is GHC's Int type not guaranteed to use exactly 32 bits of precision? This document claims it has at least 30 bits of signed precision. Is it somehow related to fitting Maybe Int or similar into 32 bits?

Answer: It is to allow implementations of Haskell that use tagging. When using tagging you need a few bits as tags (at least one; two is better). I'm not sure there are currently any such implementations, but I seem to remember that Yale Haskell used it. Tagging can somewhat avoid the disadvantages of boxing, since you no longer have to box everything; instead the tag bit will tell you whether a value is evaluated, etc.
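A quick way to check what a given GHC build actually provides, and to get an exactly-32-bit type when one is needed, is sketched below (this is my own illustration, not part of the question or answer):

    import Data.Bits (finiteBitSize)
    import Data.Int  (Int32)

    main :: IO ()
    main = do
      -- Width of the default Int on this platform (64 on a typical 64-bit GHC).
      print (finiteBitSize (0 :: Int))
      -- The actual range; the language report only guarantees [-2^29, 2^29 - 1].
      print (minBound :: Int, maxBound :: Int)
      -- If exactly 32 bits are required, ask for them explicitly:
      print (minBound :: Int32, maxBound :: Int32)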

Why does GHC consider the LHS *syntactically* when inlining?

Submitted by 为君一笑 on 2019-11-30 16:32:47
Question: According to the GHC docs:

    ...GHC will only inline the function if it is fully applied, where "fully applied" means applied to as many arguments as appear (syntactically) on the LHS of the function definition.

The example given is two semantically equivalent definitions:

    comp1 :: (b -> c) -> (a -> b) -> a -> c
    {-# INLINE comp1 #-}
    comp1 f g = \x -> f (g x)

    comp2 :: (b -> c) -> (a -> b) -> a -> c
    {-# INLINE comp2 #-}
    comp2 f g x = f (g x)

My questions: Is it only in the presence of…
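To make the arity rule concrete, here is a small sketch of two call sites, assuming the comp1 and comp2 definitions above are in scope (the surrounding function names are my own, not from the question):

    -- comp1's LHS has two arguments, so this partial application already
    -- counts as "fully applied" and is eligible for inlining.
    bumpAll :: [Int] -> [Int]
    bumpAll = map (comp1 (+ 1) (* 2))

    -- comp2's LHS has three arguments; here it only receives two, so GHC
    -- will not inline it at this call site (a saturated call elsewhere still could be).
    bumpAll2 :: [Int] -> [Int]
    bumpAll2 = map (comp2 (+ 1) (* 2))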

Basic I/O performance in Haskell

Submitted by 别等时光非礼了梦想 on 2019-11-30 15:38:50
Question: Another microbenchmark: why is this "loop" (compiled with ghc -O2 -fllvm, GHC 7.4.1, Linux 64-bit with a 3.2 kernel, output redirected to /dev/null)

    mapM_ print [1..100000000]

about 5x slower than a simple for loop in plain C using the unbuffered write(2) syscall? I am trying to gather Haskell gotchas. Even this slow C solution is much faster than the Haskell:

    int i;
    char buf[16];
    for (i = 0; i <= 100000000; i++) {
        sprintf(buf, "%d\n", i);
        write(1, buf, strlen(buf));
    }

Answer (Daniel Fischer): Okay, on my box the C code, compiled with gcc -O3, takes about 21.5 seconds to run, the original Haskell code about 56 seconds. So not a…
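The usual way to close most of that gap is to avoid going through String and print for every number and instead emit the output through a buffered bytestring Builder. The following is my own sketch of that approach, not the answer's code:

    import Data.ByteString.Builder (char7, hPutBuilder, intDec)
    import System.IO (BufferMode (BlockBuffering), hSetBinaryMode, hSetBuffering, stdout)

    main :: IO ()
    main = do
      hSetBinaryMode stdout True
      hSetBuffering stdout (BlockBuffering Nothing)
      -- One Builder for the whole output; hPutBuilder writes it out in large chunks.
      hPutBuilder stdout (mconcat [intDec i <> char7 '\n' | i <- [1 .. 100000000]])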

GHC rewrite rule specialising a function for a type class

Submitted by 女生的网名这么多〃 on 2019-11-30 12:09:58
Question: Using the GHC RULES pragma, it is possible to specialise a polymorphic function for specific types. Example from the Haskell report:

    genericLookup :: Ord a => Table a b -> a -> b
    intLookup :: Table Int b -> Int -> b

    {-# RULES "genericLookup/Int" genericLookup = intLookup #-}

This would make GHC use intLookup on an integer-indexed table and the generic version otherwise, where intLookup would probably be more efficient. I would like to accomplish something similar, using functions like the following (slightly simplified) ones:

    lookup :: Eq a => [(a, b)] -> a -> b
    lookupOrd :: Ord a => [(a, b)]…
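For reference, here is a minimal, compilable version of the report-style pattern over association lists (the function bodies and the Maybe return type are my own stand-ins, not the poster's code):

    module SpecialiseByRule where

    import Data.List (find)

    -- Generic version. NOINLINE keeps GHC from inlining it away before the
    -- rewrite rule has a chance to fire.
    {-# NOINLINE genericLookup #-}
    genericLookup :: Eq a => [(a, b)] -> a -> Maybe b
    genericLookup table k = snd <$> find ((== k) . fst) table

    -- A (hypothetically faster) Int-specific version with the same behaviour.
    intLookup :: [(Int, b)] -> Int -> Maybe b
    intLookup table k = snd <$> find ((== k) . fst) table

    -- At call sites where the key type is Int, GHC may rewrite calls to
    -- genericLookup into calls to intLookup.
    {-# RULES "genericLookup/Int" genericLookup = intLookup #-}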

Haskell: display/get a list of all user-defined functions

Submitted by £可爱£侵袭症+ on 2019-11-30 11:47:11
Question: Is there a command in Haskell which displays (or returns as a list) all the user-defined functions that have been loaded or defined in GHCi? Thanks.

Answer: To see bindings you've made at the ghci prompt (e.g. with let or <-), try :show bindings. If you've loaded some modules, you can use :show modules to get the names of loaded modules, and then :browse ModuleName to list everything in scope from that module.

Answer: When in ghci, use :browse (or just :bro) after loading the file. You may also browse unloaded modules via :browse Foo.Bar.Baz.

Source: https://stackoverflow.com/questions/10272094/haskell-display
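An illustrative GHCi session tying these commands together (the module and binding names are made up; output is omitted):

    ghci> let square x = x * x
    ghci> :show bindings
    ghci> :load MyModule.hs
    ghci> :show modules
    ghci> :browse MyModule
    ghci> :browse Data.List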

Why were type classes difficult to implement?

Submitted by 时光怂恿深爱的人放手 on 2019-11-30 11:44:30
Question: On slide 30/78 of this presentation, Simon suggests that the implementation of type classes was a "despair" at the beginning. Does anybody know why that was?

Answer 1: I guess I'm one of the few people who have first-hand experience of why it was hard, since I implemented it in hbc when there was no prior art. What was clear from the Wadler & Blott paper was that type checking was an extension of Hindley-Milner type checking, and that at runtime you should be passing dictionaries around. From that to…
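To make "passing dictionaries around" concrete, here is a small hand-rolled sketch (entirely my own illustration, not from the answer) of how an overloaded function can be compiled into one that takes an explicit record of methods:

    -- A "dictionary" for Eq, written as an ordinary record of functions.
    data EqDict a = EqDict { eq :: a -> a -> Bool }

    -- The overloaded `elem :: Eq a => a -> [a] -> Bool` in dictionary-passing
    -- style: the class constraint becomes an extra value argument.
    elemD :: EqDict a -> a -> [a] -> Bool
    elemD d x = any (eq d x)

    -- The `instance Eq Int` becomes a concrete dictionary value.
    eqIntDict :: EqDict Int
    eqIntDict = EqDict (==)

    -- At a call site the compiler picks and passes the right dictionary.
    example :: Bool
    example = elemD eqIntDict 3 [1, 2, 3]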

Can a `ST`-like monad be executed purely (without the `ST` library)?

Submitted by 一世执手 on 2019-11-30 11:12:20
Question: This post is literate Haskell. Just put it in a file like "pad.lhs" and ghci will be able to run it.

    > {-# LANGUAGE GADTs, Rank2Types #-}
    > import Control.Monad
    > import Control.Monad.ST
    > import Data.STRef

Okay, so I was able to figure out how to represent the ST monad in pure code. First we start with our reference type. Its specific value is not really important. The most important thing is that PT s a should not be isomorphic to any other type forall s. (In particular, it should be isomorphic…
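For readers who just want the flavour of the idea, here is a separate minimal sketch (all names are mine, not the poster's PT type): an ST-like computation encoded as a GADT of instructions over Int cells and interpreted purely, with a list standing in for the heap and the rank-2 quantifier playing the same escape-prevention role as in runST.

    {-# LANGUAGE GADTs, Rank2Types #-}

    newtype Cell s = Cell Int

    data Prog s a where
      Done :: a -> Prog s a
      Bind :: Prog s a -> (a -> Prog s b) -> Prog s b
      New  :: Int -> Prog s (Cell s)
      Get  :: Cell s -> Prog s Int
      Put  :: Cell s -> Int -> Prog s ()

    -- Interpret the instruction tree against a list of cell values.
    runProg :: (forall s. Prog s a) -> a
    runProg p = fst (go p [])
      where
        go :: Prog s a -> [Int] -> (a, [Int])
        go (Done x)         heap = (x, heap)
        go (Bind m k)       heap = let (x, heap') = go m heap in go (k x) heap'
        go (New v)          heap = (Cell (length heap), heap ++ [v])
        go (Get (Cell i))   heap = (heap !! i, heap)
        go (Put (Cell i) v) heap = ((), take i heap ++ v : drop (i + 1) heap)

    -- Allocate a cell, overwrite it, read it back: evaluates to 42.
    example :: Int
    example = runProg (New 1 `Bind` \c -> Put c 41 `Bind` \_ -> Get c `Bind` \n -> Done (n + 1))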

Debugging a memory leak that doesn't show on heap profiling

Submitted by 假如想象 on 2019-11-30 10:45:54
Question: I'm working on a Haskell daemon that receives and processes JSON requests. While the operations of the daemon are complex, the main structure is intentionally kept simple: its internal state is just an IORef holding a data structure, and all threads perform atomic operations on this IORef. Then there are a few threads that, upon a trigger, take the value and do something with it. The problem is that the daemon is leaking memory and I can't find out why. It's certainly related to the requests: when the daemon is getting several requests per second, it leaks something like 1 MB/s (as reported by the…
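One classic cause of a leak that heap profiling struggles to attribute is a chain of unevaluated thunks building up inside the IORef itself. Whether or not that is the poster's actual problem, it is the standard first thing to rule out; a sketch with a made-up state type follows:

    import Data.IORef (IORef, atomicModifyIORef, atomicModifyIORef')

    -- Hypothetical daemon state: just a request counter for illustration.
    data DaemonState = DaemonState { requestCount :: !Int }

    -- Leaky pattern: atomicModifyIORef is lazy, so every request leaves
    -- another unevaluated application of the update function in the IORef.
    bumpLazy :: IORef DaemonState -> IO ()
    bumpLazy ref =
      atomicModifyIORef ref (\s -> (DaemonState (requestCount s + 1), ()))

    -- Usual fix: the strict variant forces the new value to WHNF, and the
    -- strict field then forces the counter itself.
    bumpStrict :: IORef DaemonState -> IO ()
    bumpStrict ref =
      atomicModifyIORef' ref (\s -> (DaemonState (requestCount s + 1), ()))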