Multi-threading and Single-threading performance issues in CPU-bound task

Posted by 冷暖自知 on 2021-01-28 05:31:05

Question


The two following single-threaded and multi-threaded scripts take the same amount of time when I give a big number like 555550000 as input.

single thread

import threading, time
a=[]
def print_factors(x):
    for i in range(1, x + 1):
        if x % i == 0:
            a.append(i)

n=int(input("Please enter a large number"))
print("Starting time is %s" % time.ctime(time.time()))
print("The factors of",n,"are:")
thread = threading.Thread(target=print_factors,args=(n,))
thread.start()
thread.join()
print("Finishing time is %s" % (time.ctime(time.time())))
print(a)

multi thread

import threading, time
a=[]
def print_factors1(x):
    for i in range(1, int(x/2)):
        if x % i == 0:
            a.append(i)

def print_factors2(x):
    for i in range(int(x/2), x+1):
        if x % i == 0:
            a.append(i)

n=int(input("Please enter a large number"))
print("Starting time is %s" % time.ctime(time.time()))
thread1 = threading.Thread(target=print_factors1,args=(n,))
thread2 = threading.Thread(target=print_factors2,args=(n,))
print("The factors of",n,"are:")
thread1.start()
thread2.start()
thread1.join()  # wait for both threads, not just thread2
thread2.join()
print("Finishing time is %s" % (time.ctime(time.time())))
print(a)

I am trying to understand the difference between single-threading and multi-threading in terms of the time taken to get the results.
I measure similar timings for both versions and cannot figure out why.
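One note on measurement: time.ctime only has one-second resolution. A sketch like the following, using time.perf_counter with a stand-in workload (the sum-of-squares loop here is an arbitrary example, not from the scripts above), measures elapsed time much more precisely:

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(1_000_000))  # stand-in CPU-bound workload
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.4f} seconds")
```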


Answer 1:


Your problem is the GIL, the Global Interpreter Lock.

The Python Global Interpreter Lock, or GIL, in simple words, is a mutex (a lock) that allows only one thread at a time to hold control of the Python interpreter.

You can find detailed information about the GIL here (a quick Google search will turn up many more sources):

  • https://wiki.python.org/moin/GlobalInterpreterLock
  • What is the global interpreter lock (GIL) in CPython?
  • https://medium.com/python-features/pythons-gil-a-hurdle-to-multithreaded-program-d04ad9c1a63
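To see the GIL's effect directly, here is a minimal sketch (the busy() workload and the count N are arbitrary choices of mine, not from the question). Splitting a pure-Python CPU-bound loop across two threads typically takes about as long as running it in one thread, because only one thread can execute Python bytecode at a time:

```python
import threading
import time

def busy(n):
    # pure-Python CPU-bound loop; the thread running it holds the GIL
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

# one thread doing all the work
start = time.perf_counter()
t = threading.Thread(target=busy, args=(N,))
t.start()
t.join()
single = time.perf_counter() - start

# two threads splitting the same amount of work
start = time.perf_counter()
threads = [threading.Thread(target=busy, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
multi = time.perf_counter() - start

# on CPython, the two timings are typically close rather than ~2x apart
print(f"single thread: {single:.3f}s, two threads: {multi:.3f}s")
```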

Your task is CPU-bound, so you need to change your implementation to use processes instead of threads: each process runs its own interpreter with its own GIL, so they can actually execute in parallel.
I changed your script as follows:

from multiprocessing import Pool
import time
def print_factors1(x):
    a=[]
    for i in range(1, int(x/2)):
        if x % i == 0:
            a.append(i)
    return a

def print_factors2(x):
    a=[]
    for i in range(int(x/2), x+1):
        if x % i == 0:
            a.append(i)
    return a

if __name__ == '__main__':
    n=int(input("Please enter a large number"))
    pool = Pool(processes=2)
    print("Starting time is %s" % time.ctime(time.time()))

    process1 = pool.apply_async(print_factors1,[n])
    process2 = pool.apply_async(print_factors2,[n])

    pool.close()
    pool.join()

    print("Finishing time is %s" % (time.ctime(time.time())))
    print("The factors of",n,"are:")
    print(process1.get())
    print(process2.get())

Take into account that threads share memory, while processes don't: each process works on its own copy of the data, which is why the results are returned from the functions and collected with apply_async(...).get() instead of appended to a shared list.



Source: https://stackoverflow.com/questions/62154869/multi-threading-and-single-threading-performance-issues-in-cpu-bound-task
