I've got a program that reads in 3 strings per line for 50,000 lines. It then does other things. The part that reads the file and converts the strings to integers is taking 80% of the total running time.
I'm able to get almost the same timings as yours. I think the problem was with my code that was doing the timings.
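I can't reconstruct the old, broken harness exactly, but the fixed one boils down to a best-of-N timeit loop. A minimal sketch, assuming a time_one helper of my own naming and big.txt as the input file:

    import timeit

    def read_read(fname):
        # Baseline: slurp the whole file, no splitting, no int() conversion.
        with open(fname) as f:
            f.read()

    def time_one(func, fname="big.txt", repeat=5, number=10):
        # Best-of-N timing: take the minimum over several repeats so that
        # scheduler noise and cache warm-up don't inflate the result.
        timer = timeit.Timer(lambda: func(fname))
        return min(timer.repeat(repeat=repeat, number=number)) / number

    print("read_read %.0f usec big.txt" % (time_one(read_read) * 1e6))

With that harness, the raw per-reader timings plus a summary table sorted by ratio against the read_read baseline come out like this: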
read_james_otigo                   40   msec  big.txt
read_james_otigo_with_int_float   116   msec  big.txt
read_map                          134   msec  big.txt
read_map_local                    131   msec  big.txt
read_numpy_loadtxt                400   msec  big.txt
read_read                         488   usec  big.txt
read_readlines                    9.24  msec  big.txt
read_readtxt                      4.36  msec  big.txt

name                              time        ratio    comment
read_read                          488  usec    1.00   big.txt
read_readtxt                      4.36  msec    8.95   big.txt
read_readlines                    9.24  msec   18.95   big.txt
read_james_otigo                    40  msec   82.13   big.txt
read_james_otigo_with_int_float    116  msec  238.64   big.txt
read_map_local                     131  msec  268.05   big.txt
read_map                           134  msec  274.87   big.txt
read_numpy_loadtxt                 400  msec  819.42   big.txt
A second run gives essentially the same numbers:

read_james_otigo                  39.4  msec  big.txt
read_readtxt                      4.37  msec  big.txt
read_readlines                    9.21  msec  big.txt
read_map_local                     131  msec  big.txt
read_james_otigo_with_int_float    116  msec  big.txt
read_map                           134  msec  big.txt
read_read                          487  usec  big.txt
read_numpy_loadtxt                 398  msec  big.txt

name                              time        ratio    comment
read_read                          487  usec    1.00   big.txt
read_readtxt                      4.37  msec    8.96   big.txt
read_readlines                    9.21  msec   18.90   big.txt
read_james_otigo                  39.4  msec   80.81   big.txt
read_james_otigo_with_int_float    116  msec  238.51   big.txt
read_map_local                     131  msec  268.84   big.txt
read_map                           134  msec  275.11   big.txt
read_numpy_loadtxt                 398  msec  816.71   big.txt
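For reference, here is roughly what I assume the benchmarked functions do, judging from their names only; not every variant is reproduced, and the bodies below are sketches, not the actual code:

    import numpy as np

    def read_read(fname):
        # Fastest: read the raw bytes and stop; nothing is parsed.
        with open(fname) as f:
            f.read()

    def read_readlines(fname):
        # Build a list of lines; still no field splitting or conversion.
        with open(fname) as f:
            f.readlines()

    def read_map(fname):
        # Split each line into its 3 fields and convert each to int;
        # this conversion is where most of the extra time goes.
        with open(fname) as f:
            return [list(map(int, line.split())) for line in f]

    def read_numpy_loadtxt(fname):
        # numpy's generic text loader: very flexible, but much slower
        # than hand-rolled parsing for a simple fixed-format file.
        return np.loadtxt(fname, dtype=int)

Reading the tables with that in mind: read_read just touches the bytes, so the ratio column measures pure parsing overhead relative to that baseline. The jump from read_james_otigo (about 40 msec) to read_james_otigo_with_int_float (about 116 msec) suggests the int/float conversion itself, not the file I/O, is the dominant cost, which matches the 80% figure from the question.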