Question
I would like to understand a little more about these two parameters: intra and inter op parallelism threads
import tensorflow as tf

session_conf = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1)
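For reference, here is roughly how I attach this configuration to the Keras backend (a minimal sketch of my setup, building on the snippet above):

from keras import backend as K

# Hand a session built from the restricted config to Keras, so that
# everything Keras runs goes through these thread pools.
K.set_session(tf.Session(config=session_conf))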
I read this post, which has a pretty good explanation: TensorFlow: inter- and intra-op parallelism configuration
But I am seeking confirmation and also asking new questions below. I am running my task with Keras 2.0.9 and TensorFlow 1.3.0:
- when both are set to 1, does it mean that, on a computer with 4 cores for example, there will be only 1 thread shared by the four cores?
- why does using 1 thread not seem to affect my task much in terms of speed? My network has the following structure: dropout, conv1d, maxpooling, lstm, globalmaxpooling, dropout, dense. The post cited above says that if there are a lot of matrix multiplication and subtraction operations, a multi-thread setting can help. I do not know much about the math underneath, but I'd imagine there are quite a lot of such matrix operations in my model. However, changing both parameters from 0 (the default) to 1 slows a 10-minute task down by only about 1 minute.
- why can multi-threading be a source of non-reproducible results? See Results not reproducible with Keras and TensorFlow in Python. This is the main reason I need to use single threads, as I am doing scientific experiments. And surely TensorFlow has been improving over time, so why has this not been addressed in a release?
Many thanks in advance
Answer 1:
When both parameters are set to 1, there will be 1 thread running on 1 of the 4 cores. The core on which it runs might change, but only one thread will be running at a time.
When running something in parallel there is always a trade-off between time lost on communication and time gained through parallelization. Depending on the hardware used and the specific task (like the size of the matrices), the speedup will vary. Sometimes running something in parallel will even be slower than using one core.
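To illustrate, here is a minimal sketch (the matrix size and run count are arbitrary choices of mine, not from the original answer) that times the same matrix multiplication under a single-threaded and the default thread configuration in TF 1.x:

import time
import numpy as np
import tensorflow as tf

def time_matmul(intra, inter, size=2000, runs=10):
    """Average the runtime of a size x size matmul under a given thread config."""
    config = tf.ConfigProto(intra_op_parallelism_threads=intra,
                            inter_op_parallelism_threads=inter)
    with tf.Graph().as_default():
        a = tf.constant(np.random.rand(size, size), dtype=tf.float32)
        b = tf.constant(np.random.rand(size, size), dtype=tf.float32)
        product = tf.matmul(a, b)
        with tf.Session(config=config) as sess:
            sess.run(product)            # warm-up run, excluded from timing
            start = time.time()
            for _ in range(runs):
                sess.run(product)
            return (time.time() - start) / runs

print("1 thread :", time_matmul(intra=1, inter=1))
print("default  :", time_matmul(intra=0, inter=0))  # 0 lets TF pick

For large matrices the multi-threaded run is usually faster; for small ones, the coordination overhead can eat the gains.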
For example, when using floats on a CPU,
(a + b) + c
will not be equal to
a + (b + c)
because of limited floating-point precision. Using multiple parallel threads means that operations like a + b + c will not always be computed in the same order, leading to different results on each run. However, those differences are extremely small and will not affect the overall result in most cases. Completely reproducible results are usually only needed for debugging. Enforcing complete reproducibility would slow down multi-threading a lot.
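You can see the effect in plain Python (the particular values are just an illustration):

a, b, c = 0.1, 0.2, 0.3

print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False

With many threads, the summation order of a long reduction depends on scheduling, so run-to-run differences of this size can accumulate.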
Answer 2:
The answer to question 1 is "No".
Setting both parameters to 1 (intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) will still generate N threads, where N is the number of cores. I've tested this multiple times on different versions of TensorFlow, and it holds even for the latest version. There are multiple questions on how to reduce the number of threads to 1, but with no clear answer; a quick way to observe the thread count yourself is sketched after the list. Some examples are
- How to stop TensorFlow from multi-threading
- https://github.com/usnistgov/frvt/issues/12
- Changing the number of threads in TensorFlow on Cifar10
- Importing TensorFlow spawns threads
- https://github.com/tensorflow/tensorflow/issues/13853
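A minimal sketch to observe the thread count (it assumes the third-party psutil package is installed; psutil is not part of TensorFlow):

import psutil
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)

with tf.Session(config=config) as sess:
    sess.run(tf.constant(1.0) + tf.constant(2.0))
    # Count OS-level threads of this process. TensorFlow's pools are
    # native threads, so Python's threading module would not see them.
    print("OS threads:", len(psutil.Process().threads()))

If the claim above holds, this prints a number close to the core count rather than 1.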
Source: https://stackoverflow.com/questions/47548145/understanding-tensorflow-inter-intra-parallelism-threads