Float16 slower than float32 in Keras

终归单人心 2020-12-30 01:30

I'm testing out my new NVIDIA Titan V, which supports float16 operations. I noticed that during training, float16 is much slower (~800 ms/step) than float32 (~500 ms/step).
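
For reference, a minimal sketch of the kind of timing comparison described above; the model, layer sizes, and the time_training helper are illustrative assumptions, not the actual code from the test:

    import time
    import numpy as np
    from tensorflow import keras

    def time_training(dtype):
        # Make all layers and weights default to the given float type.
        keras.backend.set_floatx(dtype)
        x = np.random.rand(4096, 1024).astype(dtype)
        y = np.random.rand(4096, 10).astype(dtype)
        model = keras.Sequential([
            keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
            keras.layers.Dense(10),
        ])
        model.compile(optimizer="sgd", loss="mse")
        model.fit(x, y, epochs=1, batch_size=64, verbose=0)  # warm-up
        start = time.time()
        model.fit(x, y, epochs=5, batch_size=64, verbose=0)
        return (time.time() - start) / 5

    for dtype in ("float32", "float16"):
        print(dtype, round(time_training(dtype), 3), "s/epoch")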

2 Answers
  •  长情又很酷 2020-12-30 02:09

    From the documentation of cuDNN (section 2.7, subsection Type Conversion) you can see:

    Note: Accumulators are 32-bit integers which wrap on overflow.

    and that this holds for the standard INT8 data type, where the data input, the filter input, and the output are all INT8.

    Under those assumptions, @jiandercy is right that there is a float16-to-float32 conversion and then a conversion back before the result is returned, and that conversion overhead is what makes float16 slower.
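
    A practical follow-up (my own sketch, not part of the quoted answer): rather than casting everything to float16, the Keras mixed-precision API keeps variables and accumulation in float32 while running the matrix math in float16, which is the mode the Titan V's Tensor Cores are built for. This assumes TensorFlow 2.4 or later; older releases expose the same idea under tf.keras.mixed_precision.experimental.

        from tensorflow import keras

        # Run compute in float16 but keep variables in float32.
        keras.mixed_precision.set_global_policy("mixed_float16")

        model = keras.Sequential([
            keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
            # Keep the final output in float32 for numerical stability.
            keras.layers.Dense(10, dtype="float32"),
        ])
        # Under this policy, compile() wraps the optimizer with automatic
        # loss scaling so small float16 gradients do not underflow.
        model.compile(optimizer="sgd", loss="mse")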
