TensorFlow 2.0 model using tf.function is very slow and recompiles every time the train count changes. Eager runs about 4x faster

Submitted by 五迷三道 on 2019-12-02 21:05:16

I analyzed this behavior of @tf.function in "Using a Python native type".

In short: by design, tf.function does not automatically box Python native types into tf.Tensor objects with a well-defined dtype.

If your function accepts a tf.Tensor object, the function is analyzed on the first call, and a graph is built and associated with it. On every subsequent call, if the dtype of the tf.Tensor argument matches, that graph is reused.

But when a Python native type is passed instead, the graph is rebuilt every time the function is invoked with a different value.
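A minimal sketch of the retracing behavior. A Python-side `print` inside a `tf.function` executes only while the graph is being traced, so it marks every rebuild; `experimental_get_tracing_count()` (available in recent TF 2.x releases) reports how many graphs were built:

```python
import tensorflow as tf

@tf.function
def square(x):
    # This print runs only during tracing (graph construction),
    # so each printed line marks a new graph being built.
    print("tracing for", x)
    return x * x

# Python ints: a new graph is traced for every distinct value.
square(2)
square(3)  # different Python value -> retrace

print(square.experimental_get_tracing_count())  # 2 graphs for 2 values
```

Calling `square(2)` again after this would reuse the cached graph for `2`, but every previously unseen Python value triggers another trace.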

In short: design your code to use tf.Tensor everywhere instead of Python variables if you plan to use @tf.function.
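The same function with the inputs wrapped in tf.Tensor objects, sketched to show the graph being reused across calls that share a dtype and shape:

```python
import tensorflow as tf

@tf.function
def square(x):
    return x * x

# tf.Tensor inputs: the graph traced on the first call is reused
# for later calls with the same dtype and shape.
square(tf.constant(2))
square(tf.constant(3))  # same dtype/shape -> no retrace

print(square.experimental_get_tracing_count())  # 1 graph for both calls
```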

tf.function is not a wrapper that magically accelerates a function that works well in eager mode; it is a wrapper that requires you to design the eager function (body, input parameters, dtypes) with an understanding of what will happen once the graph is created, in order to get real speed-ups.
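One way to make that design contract explicit (a sketch; `scale` is a hypothetical function) is tf.function's `input_signature` argument, which pins the accepted dtype so exactly one graph is traced and incompatible calls fail loudly instead of silently retracing:

```python
import tensorflow as tf

# Pinning the signature: one graph for any float32 input, regardless
# of shape; passing a Python float or an int tensor raises instead
# of triggering a silent retrace.
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def scale(x):
    return x * 0.5

scale(tf.constant([1.0, 2.0]))
scale(tf.constant(7.0))  # different shape, same dtype: still one graph

print(scale.experimental_get_tracing_count())  # 1
```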
