Everyone is pointing out that the culprit is the Thrift library, but I'll focus on your code (and where I can help you gain some speed).
Using a simplified version of your code, the part where you calculate itemsv:
testfunc mtsize = itemsv
  where
    size   = i32toi $ fromJust mtsize
    item i = Item (Just $ Vector.fromList $ map itoi32 [i..100])
    items  = map item [0..(size-1)]
    itemsv = Vector.fromList items
First, a lot of intermediate data is created in item i. Due to laziness, those small and quick-to-compute vectors become delayed thunks when we could have them right away.
Two carefully placed $!, which force strict evaluation:
item i = Item (Just $! Vector.fromList $! map itoi32 [i..100])
will give you a 25% decrease in runtime (for sizes of 1e5 and 1e6).
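If you want to check that kind of change in isolation, here is a minimal sketch of how you could benchmark the lazy and strict constructions with criterion. The Item type below is a simplified stand-in for the Thrift-generated one (no Thrift involved), so the numbers will not match your full program exactly:

import Control.DeepSeq (NFData (..))
import Criterion.Main (bench, defaultMain, nf)
import Data.Int (Int32)
import qualified Data.Vector as Vector

-- Simplified stand-in for the Thrift-generated Item, just so the two
-- variants can be compared on their own.
newtype Item = Item (Maybe (Vector.Vector Int32))

instance NFData Item where
  rnf (Item m) = rnf m

-- Lazy construction: the inner vector is left as a thunk inside Just.
lazyItems :: Int -> Vector.Vector Item
lazyItems size = Vector.fromList (map item [0 .. size - 1])
  where item i = Item (Just $ Vector.fromList $ map fromIntegral [i .. 100])

-- Strict construction: ($!) forces each inner vector as it is built.
strictItems :: Int -> Vector.Vector Item
strictItems size = Vector.fromList (map item [0 .. size - 1])
  where item i = Item (Just $! Vector.fromList $! map fromIntegral [i .. 100])

main :: IO ()
main = defaultMain
  [ bench "lazy"   (nf lazyItems 100000)
  , bench "strict" (nf strictItems 100000)
  ]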
But there is a more problematic pattern here: you generate a list only to convert it into a vector, instead of building the vector directly.
Look at those last two lines: you create a list -> map a function over it -> transform it into a vector.
Well, vectors are very similar to lists, so you can do the same thing directly: generate a vector -> Vector.map over it, and you're done. There's no need to convert a list into a vector first, and mapping over a vector is usually faster than mapping over a list!
So you can get rid of items and rewrite itemsv like this:
itemsv = Vector.map item $ Vector.enumFromN 0 size
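Note that Vector.enumFromN x n builds a vector of n consecutive values starting at x, with no intermediate list; that's why the count is size here, matching the list [0..(size-1)]. A tiny sketch, assuming the boxed Data.Vector:

import qualified Data.Vector as Vector

-- enumFromN x n builds <x, x+1, ..., x+n-1> directly, without going
-- through a list first.
demo :: Vector.Vector Int
demo = Vector.enumFromN 0 5   -- <0,1,2,3,4>, same as Vector.fromList [0..4]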
Reapplying the same logic to item i, we can eliminate all the lists:
testfunc3 mtsize = itemsv
  where
    size   = i32toi $! fromJust mtsize
    item i = Item (Just $! Vector.enumFromN (i :: Int32) (101 - fromIntegral i))  -- [i..100] has 101-i elements
    itemsv = Vector.map item $ Vector.enumFromN 0 size
This gives a 50% decrease from the initial runtime.
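For completeness, here is a self-contained sketch of that final version; Item and i32toi are simplified stand-ins for the Thrift-generated type and the conversion helper from your code, and main just forces the result so you can time it (for example with +RTS -s):

import Data.Int (Int32)
import Data.Maybe (fromJust)
import qualified Data.Vector as Vector

-- Simplified stand-ins for the Thrift-generated type and helper.
newtype Item = Item (Maybe (Vector.Vector Int32))

i32toi :: Int32 -> Int
i32toi = fromIntegral

testfunc3 :: Maybe Int32 -> Vector.Vector Item
testfunc3 mtsize = itemsv
  where
    size   = i32toi $! fromJust mtsize
    item i = Item (Just $! Vector.enumFromN (i :: Int32) (101 - fromIntegral i))
    itemsv = Vector.map item $ Vector.enumFromN 0 size

main :: IO ()
main =
  -- Touch every inner vector so all the work is actually performed,
  -- then print the total number of elements as a summary.
  print . Vector.sum . Vector.map (\(Item m) -> maybe 0 Vector.length m)
    $ testfunc3 (Just 100000)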