Tensorflow matmul calculations on GPU are slower than on CPU


Question


I'm experimenting with GPU computation for the first time and was hoping for a big speed-up, of course. With a basic example in TensorFlow, however, it was actually worse:

On cpu:0, each of the ten runs takes 2 seconds on average; gpu:0 takes 2.7 seconds, and gpu:1 is 50% worse than cpu:0 at 3 seconds.

Here's the code:

import tensorflow as tf
import numpy as np
import time
import random

for _ in range(10):
    with tf.Session() as sess:
        start = time.time()
        with tf.device('/gpu:0'):  # swap for '/cpu:0' or '/gpu:1' to compare
            # Build two 1000x1000 matrices from Python-generated random numbers
            a = tf.constant([random.random() for _ in range(1000 * 1000)],
                            shape=[1000, 1000], name='a')
            b = tf.constant([random.random() for _ in range(1000 * 1000)],
                            shape=[1000, 1000], name='b')
            # Chain four matrix multiplications
            c = tf.matmul(a, b)
            d = tf.matmul(a, c)
            e = tf.matmul(a, d)
            f = tf.matmul(a, e)
            for _ in range(1000):
                sess.run(f)
        end = time.time()
        print(end - start)

What am I observing here? Is the run time perhaps dominated mainly by copying data between RAM and the GPU?
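One way to check what actually runs on the GPU (it does not profile memory copies directly, but it shows which device every op ends up on) is to enable device-placement logging. A minimal sketch, assuming TensorFlow 1.x:

import random
import tensorflow as tf

# With log_device_placement=True the session prints the device assigned to
# every op. The Python list comprehension itself always runs on the host;
# the log only covers the TensorFlow ops (Const, MatMul, ...).
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    with tf.device('/gpu:0'):
        a = tf.constant([random.random() for _ in range(1000 * 1000)],
                        shape=[1000, 1000], name='a')
        c = tf.matmul(a, a)
    sess.run(c)  # placement messages are written to the console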


Answer 1:


The code you use to generate the data runs on the CPU (random.random() is a regular Python function, not a TensorFlow op). Also, calling it 10^6 times is slower than requesting 10^6 random numbers in a single op. Change the code to:

a = tf.random_uniform([1000, 1000], name='a')
b = tf.random_uniform([1000, 1000], name='b')

so that the data is generated in parallel on the GPU and no time is wasted transferring it from RAM to the GPU.
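Putting the suggestion together, a revised benchmark might look like the sketch below (assuming TensorFlow 1.x). It also builds the graph once, outside the timed region, and does a warm-up run so that graph construction and the first kernel launches are not counted:

import time
import tensorflow as tf

with tf.device('/gpu:0'):  # swap for '/cpu:0' to compare
    # Random inputs generated by TensorFlow ops, on the chosen device
    a = tf.random_uniform([1000, 1000], name='a')
    b = tf.random_uniform([1000, 1000], name='b')
    c = tf.matmul(a, b)
    d = tf.matmul(a, c)
    e = tf.matmul(a, d)
    f = tf.matmul(a, e)

with tf.Session() as sess:
    sess.run(f)          # warm-up run, excluded from the timing
    start = time.time()
    for _ in range(1000):
        sess.run(f)
    print(time.time() - start)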



Source: https://stackoverflow.com/questions/40738493/tensorflow-matmul-calculations-on-gpu-are-slower-than-on-cpu
