repa

Repa: computeS and computeP?

Submitted by 左心房为你撑大大i on 2020-02-25 04:56:47
Question: I am trying out the Repa library, and I want to process an image both in parallel and sequentially. I can read the image (with another library, DevIL) and process it with computeP (in parallel). Here is the code (it comes from an example on the Haskell wiki):

    import Foreign.Ptr
    import System.Environment
    import Data.Word
    import Data.Array.Repa hiding ((++))
    import Data.Array.Repa.IO.DevIL
    import Data.Array.Repa.Repr.ForeignPtr

    main :: IO ()
    main = do
        [f] <- getArgs
        (RGB v) <- runIL $ readImage f
    …
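The difference the question turns on: computeS evaluates a delayed array sequentially and is a pure function, while computeP evaluates it in parallel and runs in a monad (repa uses the monad to sequence parallel computations). A minimal sketch of both, separate from the DevIL code above (the array here is illustrative, not from the question):

    import Data.Array.Repa as R

    main :: IO ()
    main = do
        let xs      = R.fromListUnboxed (Z :. (10 :: Int)) [1 .. 10 :: Double]
            delayed = R.map (* 2) xs                              -- Array D DIM1 Double
            ys      = R.computeS delayed :: Array U DIM1 Double   -- sequential, pure
        zs <- R.computeP delayed :: IO (Array U DIM1 Double)      -- parallel, monadic
        print ys
        print zs

For computeP to actually run in parallel, the program must be compiled with -threaded and run with +RTS -N.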

How to resolve use of operators in a type declaration (Repa)?

Submitted by 空扰寡人 on 2020-01-06 19:33:00
Question: I am playing around with Repa, and the code below compiles and runs.

    import qualified Data.Array.Repa as R

    --t :: R.Array R.U (R.Z R.:. Int) Float
    --t = R.fromListUnboxed (R.Z R.:. (10::Int)) ([1.0..10]::[Float])

    main = do
        let x = R.fromListUnboxed (R.Z R.:. (10::Int)) ([1.0..10]::[Float])
        print x

I believe (from checking in GHCi) that x has the type signature that I have declared t to have, but I get this error if I uncomment everything associated with t: Illegal operator ‘R.:.’ in type ‘R.Z R…
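The error says the operator :. is not accepted at the type level, which GHC only allows with the TypeOperators extension. A sketch of the uncommented code with that extension enabled, assuming it is indeed the missing piece here:

    {-# LANGUAGE TypeOperators #-}

    import qualified Data.Array.Repa as R

    t :: R.Array R.U (R.Z R.:. Int) Float
    t = R.fromListUnboxed (R.Z R.:. (10 :: Int)) ([1.0 .. 10] :: [Float])

    main :: IO ()
    main = print t

Alternatively, the synonym R.DIM1 (which expands to R.Z R.:. Int) avoids writing the operator in the type at all.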

deepSeqArray of a single precision array

Submitted by 让人想犯罪 __ on 2019-12-10 18:53:22
Question: I have a vector of Float elements created by the getVectorFloat function. In order to make some measurements, I need to use deepSeqArray. However, I don't manage to do it. Here is my example:

    import Data.Array.Repa as R
    import Data.Array.Repa.Algorithms.Randomish

    len :: Int
    len = 3000000

    main = do
        ws <- getVectorFloat len
        ws `deepSeqArray` return ()

    getVectorFloat :: Int -> Array DIM1 Float
    getVectorFloat len = R.map toFloat (getVectorDouble len)

    toFloat :: Double -> Float
    toFloat a = …
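Two likely problems stand out in the snippet, offered as a hedged reading since it is truncated: getVectorFloat is pure, so it cannot be bound with <- in IO, and the result of R.map is a delayed array, for which deepSeqArray forces only the shape and the element function, not the elements themselves. A Repa 3 style sketch that forces the mapped result into a manifest array first (randomishDoubleArray from repa-algorithms is assumed to take a shape, lower and upper bounds, and a seed; 42 is an arbitrary seed):

    import Data.Array.Repa as R
    import Data.Array.Repa.Algorithms.Randomish (randomishDoubleArray)

    len :: Int
    len = 3000000

    getVectorFloat :: Int -> Array U DIM1 Float
    getVectorFloat n =
        computeS $ R.map realToFrac (randomishDoubleArray (Z :. n) 0 1 42)

    main :: IO ()
    main = do
        let ws = getVectorFloat len      -- pure, so bound with let, not <-
        ws `deepSeqArray` return ()      -- now forces a manifest array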

Why is there no mapM for repa arrays?

Submitted by 百般思念 on 2019-12-10 14:36:34
Question: Background: I am using repa more as a "management" tool. I pass around reactive-banana's AddHandlers in an Array: Array D DIM2 (AddHandler Bool). Currently I am using this kludge:

    mapMArray :: (Monad m, R.Source r a, R.Shape sh)
              => (a -> m b) -> Array r sh a -> m (Array D sh b)
    mapMArray f a = do
        l <- mapM f . R.toList $ a
        return $ R.fromFunction sh (\i -> l !! R.toIndex sh i)
      where
        sh = R.extent a

So I can do something like this:

    makeNetworkDesc :: Frameworks t => Array D DIM2 (AddHandler …
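One incidental cost in the kludge as written: (!!) makes each element access O(n) in the list length. A sketch of the same idea with the intermediate result held in a boxed vector, so lookups are O(1) (mapMArray' is a hypothetical name; Data.Vector supplies the monadic mapM):

    import qualified Data.Array.Repa as R
    import Data.Array.Repa (Array, D)
    import qualified Data.Vector as V

    mapMArray' :: (Monad m, R.Source r a, R.Shape sh)
               => (a -> m b) -> Array r sh a -> m (Array D sh b)
    mapMArray' f a = do
        v <- V.mapM f (V.fromList (R.toList a))
        return $ R.fromFunction sh (\i -> v V.! R.toIndex sh i)
      where
        sh = R.extent a

As to why repa itself has no mapM: roughly speaking, a delayed repa array is a pure function from index to element, built for bulk (and possibly parallel) evaluation, so arbitrary monadic effects have to be sequenced through a list or vector as above.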

Do Accelerate and Repa have different use cases?

Submitted by 落爺英雄遲暮 on 2019-12-03 10:15:28
Question: I've been playing around with Repa and Accelerate. They're both interesting, but I can't work out when I'd use one and when the other. Are they growing together, rivals, or just for different problems?

Answer 1: Repa is a library for efficient array construction and traversal, programmed in Haskell and run in the Haskell runtime. Repa relies on GHC's optimizer and threads for performance. You can mix arbitrary Haskell code with Repa (Repa functions such as map take Haskell functions as parameters). Accelerate is an embedded language for GPU programming. Accelerate relies on its own compiler and GPU…
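The contrast the answer draws is visible in the types. A sketch of the same map written against both libraries (the Accelerate half uses the reference interpreter backend so it stays self-contained; a real deployment would pick a GPU backend):

    import qualified Data.Array.Repa as R
    import qualified Data.Array.Accelerate as A
    import qualified Data.Array.Accelerate.Interpreter as I

    -- Repa: map takes an ordinary Haskell function and runs in the
    -- Haskell runtime.
    repaIncr :: R.Array R.U R.DIM1 Int -> R.Array R.D R.DIM1 Int
    repaIncr = R.map (+ 1)

    -- Accelerate: map takes an embedded expression (Exp Int -> Exp Int),
    -- which a backend compiles and executes outside normal Haskell evaluation.
    accIncr :: A.Vector Int -> A.Vector Int
    accIncr v = I.run (A.map (+ 1) (A.use v))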

Parallel mapM on Repa arrays

Submitted by 时光毁灭记忆、已成空白 on 2019-12-03 08:55:59
Question: In my recent work with Gibbs sampling, I've been making great use of the RVar type, which, in my view, provides a near-ideal interface to random number generation. Sadly, I've been unable to make use of Repa due to the inability to use monadic actions in maps. While clearly monadic maps can't be parallelized in general, it seems to me that RVar may be at least one example of a monad where effects can be safely parallelized (at least in principle; I'm not terribly familiar with the inner workings…
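One common workaround sidesteps the question rather than answering it: derive each element's randomness deterministically from its index, so the map is pure and an ordinary parallel computeP suffices. A toy sketch with System.Random (noiseArray is a hypothetical helper, and seeding a fresh StdGen per element like this is statistically crude; it only shows the shape of the trick):

    import Data.Array.Repa
    import System.Random (mkStdGen, randomR)

    -- A delayed array whose element at index ix is drawn from a generator
    -- seeded by ix, making the whole construction pure.
    noiseArray :: Int -> DIM2 -> Array D DIM2 Double
    noiseArray seed sh =
        fromFunction sh $ \ix ->
            fst $ randomR (0, 1) (mkStdGen (seed + toIndex sh ix))

    main :: IO ()
    main = do
        ws <- computeP (noiseArray 42 (Z :. 100 :. 100)) :: IO (Array U DIM2 Double)
        print (sumAllS ws)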

Poor performance with transpose and cumulative sum in Repa

Submitted by 故事扮演 on 2019-12-03 04:51:34
Question: I have developed a cumulative sum function as defined below in the Haskell library Repa. However, I have run into an issue when combining this function with the transpose operation. All three of the following operations take well under a second:

    cumsum $ cumsum $ cumsum x
    transpose $ transpose $ transpose x
    transpose $ cumsum x

However, if I write:

    cumsum $ transpose x

performance degrades horrendously. While each individual operation in isolation takes well under a second on a 1920x1080 image, when combined they now take 30+ seconds... Any ideas on what could be causing this? My gut tells me it…
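A hedged note on the usual cause: repa's transpose returns a delayed array, an index permutation rather than moved data, so if cumsum reads input elements more than once, every read re-evaluates the permutation with a cache-hostile access pattern. Forcing the transpose into a manifest array before the scan typically restores performance. A sketch, with the poster's cumsum (whose definition did not survive the excerpt) abstracted as a parameter:

    import Data.Array.Repa as R

    cumsumAfterTranspose
        :: (Array U DIM2 Float -> Array U DIM2 Float)   -- the poster's cumsum
        -> Array U DIM2 Float
        -> Array U DIM2 Float
    cumsumAfterTranspose cumsum x =
        let xt = computeS (transpose x) :: Array U DIM2 Float  -- materialize once
        in  cumsum xt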

'Repa' performance for planetary simulation

Submitted by 一曲冷凌霜 on 2019-12-03 04:04:20
Question: I have written a simulation of the outer planets of the solar system using the symplectic Euler method, implemented a) using repa and b) using yarr. yarr seems to perform about 30x quicker than repa. Given this, I didn't even try to use parallelism. Are there any obvious performance problems in my repa code? The repository is on GitHub. I can produce a cut-down repa-only version if that is helpful, but then you won't get the performance comparison against yarr. Alternatively, how do…
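For context on what is being benchmarked: a symplectic Euler step first updates velocities from the force at the current positions, then updates positions from the new velocities. A toy one-dimensional sketch of one such step in repa (illustrative only, not the poster's code; the force function is left abstract):

    import Data.Array.Repa as R

    type Vec = Array U DIM1 Double

    -- One symplectic Euler step: kick the velocities, then drift the positions.
    stepSymplecticEuler :: Double -> (Vec -> Vec) -> (Vec, Vec) -> (Vec, Vec)
    stepSymplecticEuler dt force (q, v) =
        let v' = computeS $ R.zipWith (\vi fi -> vi + dt * fi) v (force q)
            q' = computeS $ R.zipWith (\qi vi -> qi + dt * vi) q v'
        in  (q', v')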