Why is TensorFlow slower with a GPU than with a CPU?

假如想象 submitted on 2021-01-29 08:54:48

Question


This is a really simple neural network:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

n_pts = 500000
np.random.seed(0)
Xa = np.array([np.random.normal(13, 2, n_pts),
               np.random.normal(12, 2, n_pts)]).T
Xb = np.array([np.random.normal(8, 2, n_pts),
               np.random.normal(6, 2, n_pts)]).T

X = np.vstack((Xa, Xb))
y = np.append(np.zeros(n_pts), np.ones(n_pts)).reshape(-1, 1)


# Create a new Keras model
model = Sequential()
model.add(Dense(units=1, input_shape=(2,), activation='sigmoid'))
adam = Adam(lr=0.1)
model.compile(adam, loss='binary_crossentropy', metrics=['accuracy'])
h = model.fit(x=X, y=y, verbose=1, batch_size=100000, epochs=15, shuffle=True)

I increased the batch size up to 100k, but the CPU is still faster than the GPU (9 seconds vs. 12 at the large batch size, and more than 4x faster with smaller batch sizes). The CPU is an Intel i7-8850H and the GPU is an NVIDIA Quadro P600 (4 GB). I installed TensorFlow 1.14.0. With a more complex network like this one:

model = Sequential()
model.add(Conv2D(24, (5, 5), strides=(2, 2), input_shape=(66, 200, 3),
                 activation='elu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
# model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(100, activation='elu'))
# model.add(Dropout(0.5))
model.add(Dense(50, activation='elu'))
# model.add(Dropout(0.5))
model.add(Dense(10, activation='elu'))
# model.add(Dropout(0.5))
model.add(Dense(1))

optimizer = Adam(lr=1e-3)
model.compile(loss='mse', optimizer=optimizer)

will a GPU be faster than the CPU? What is necessary to take advantage of the GPU's power?


Answer 1:


GPUs work best on massively parallel workloads, and your simple model cannot supply one. Data must be transferred between the CPU and the GPU, so if that overhead exceeds the actual computation, the CPU will most likely be faster, since it incurs no transfer overhead at all.
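As a rough sanity check (my own back-of-envelope, not from the original answer, assuming ~12 GB/s of effective PCIe bandwidth and ~1 TFLOP/s of usable GPU throughput; real figures vary by hardware), copying one 100k-row batch of the single-neuron model to the GPU takes far longer than the matrix multiply it feeds:

```python
# Hypothetical hardware figures: ~12 GB/s PCIe copy, ~1 TFLOP/s GPU compute.
batch, features, outputs = 100_000, 2, 1

# One forward pass of Dense(1): a (batch x features) @ (features x outputs) matmul.
flops = 2 * batch * features * outputs      # ~400k FLOPs
bytes_to_gpu = batch * features * 4         # float32 inputs, ~800 kB

transfer_s = bytes_to_gpu / 12e9            # time to copy the batch over PCIe
compute_s = flops / 1e12                    # time for the GPU to do the matmul

ratio = transfer_s / compute_s
print(f"transfer takes ~{ratio:.0f}x longer than the compute")
```

On these assumed numbers the copy costs two orders of magnitude more time than the arithmetic, so kernel-launch and transfer overhead dominate no matter how large the batch gets.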

Only a much bigger model would be able to benefit from GPU acceleration.



Source: https://stackoverflow.com/questions/57115833/why-tensorflow-is-slower-with-gpu-instead-of-cpu
