Can somebody explain why the following trivial code (an implementation of Euclid's algorithm to find the greatest common divisor) is about 3 times slower than the equivalent code?
I can't replicate your result. The Python code appears to be about 4 times faster than the Ruby code:
2010-12-07 13:49:55:~/tmp$ time python iter_gcd.py 4000 3000
61356305
real 0m14.655s
user 0m14.633s
sys 0m0.012s
2010-12-07 13:43:26:~/tmp$ time ruby iter_gcd.rb 4000 3000
iter_gcd.rb:14: warning: don't put space before argument parentheses
61356305
real 0m54.298s
user 0m53.955s
sys 0m0.028s
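The original `iter_gcd.rb` is not shown in the question, but judging from the warning above and the matching output, a plausible Ruby counterpart (a sketch, not necessarily the asker's exact script) would be:

```ruby
# Hypothetical reconstruction of iter_gcd.rb -- the question's actual
# script isn't shown, so names and structure here are assumptions.
def gcd(m, n)
  m, n = n, m if n > m
  # Iterative Euclid: replace (m, n) with (n, m mod n) until n is 0.
  while n > 0
    m, n = n, m % n
  end
  m
end

def main(a1, a2)
  total = 0
  # Same pair enumeration as the Python version: j in 2..a1, i in 1..a2-1.
  (2..a1).each do |j|
    (1...a2).each do |i|
      total += gcd(i, j)
    end
  end
  puts total
end

main(Integer(ARGV[0]), Integer(ARGV[1])) unless ARGV.empty?
```

Run as `ruby iter_gcd.rb 4000 3000`; both implementations should print the same sum.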
Versions:
2010-12-07 13:50:12:~/tmp$ ruby --version
ruby 1.8.7 (2010-06-23 patchlevel 299) [i686-linux]
2010-12-07 13:51:52:~/tmp$ python --version
Python 2.6.6
Also, the Python code can be made about 8% faster:
def gcd(m, n):
    if n > m:
        m, n = n, m
    while n:
        n, m = m % n, n
    return m

def main(a1, a2):
    print sum(
        gcd(i, j)
        for j in xrange(a1, 1, -1)
        for i in xrange(1, a2)
    )

if __name__ == '__main__':
    from sys import argv
    main(int(argv[1]), int(argv[2]))
Later: when I install and use ruby 1.9.1, the Ruby code is much faster:
2010-12-07 14:01:08:~/tmp$ ruby1.9.1 --version
ruby 1.9.2p0 (2010-08-18 revision 29036) [i686-linux]
2010-12-07 14:01:30:~/tmp$ time ruby1.9.1 iter_gcd.rb 4000 3000
61356305
real 0m12.137s
user 0m12.037s
sys 0m0.020s
I think your question is really, "Why is ruby 1.9.x so much faster than ruby 1.8.x?"