Why is my GPU slower than CPU when training LSTM/RNN models?

爱一瞬间的悲伤 2020-12-29 02:52

My machine has the following spec:

CPU: Xeon E5-1620 v4

GPU: Titan X (Pascal)

Ubuntu 16.04

Nvidia driver 375.26

CUDA toolkit 8.0

4 Answers
  •  青春惊慌失措
    2020-12-29 03:12

    Your batch size is too small. Try increasing it.

    Results for my GTX1050Ti:

    imdb_bidirectional_lstm.py
    batch_size      time
    32 (default)    252
    64              131
    96              87
    128             66
    
    imdb_lstm.py
    batch_size      time
    32 (default)    108
    64              50
    96              34
    128             25
    
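    A minimal sketch of how such a benchmark might look. This is not the answerer's exact script: it assumes TensorFlow's bundled Keras and its built-in IMDB dataset, and the layer sizes simply mirror the stock imdb_lstm.py example; absolute times will differ per machine.

    import time
    from tensorflow import keras
    from tensorflow.keras import layers

    max_features = 20000  # vocabulary size, as in the stock imdb_lstm.py
    maxlen = 80           # truncate/pad each review to 80 tokens

    # Load and pad the data once, outside the timing loop.
    (x_train, y_train), _ = keras.datasets.imdb.load_data(num_words=max_features)
    x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)

    # Train one epoch at each batch size and report the wall-clock time.
    for batch_size in (32, 64, 96, 128):
        model = keras.Sequential([
            layers.Embedding(max_features, 128),
            layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(loss="binary_crossentropy", optimizer="adam",
                      metrics=["accuracy"])
        start = time.time()
        model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
        print(f"batch_size={batch_size}: {time.time() - start:.1f} s/epoch")

    Larger batches let the GPU process more sequences in parallel at each recurrent step, so each epoch needs fewer kernel launches; that is consistent with the per-epoch time roughly halving from batch size 32 to 64 in the tables above.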
