How can I get the LLVM IR dump from XLA in TensorFlow?


[FYI I can't leave comments, since I just joined and apparently don't have a reputation yet.]

First off, make sure to read the XLA JIT documentation, including the starred blue boxes: https://www.tensorflow.org/performance/xla/jit. In particular, note that turning on XLA for your whole session currently performs JIT compilation only for GPU, not for CPU.
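For reference, session-wide JIT is enabled through the session's ConfigProto. Here is a minimal sketch, assuming the TF 1.x session API described on that page:

```python
import tensorflow as tf

# Turn on XLA JIT for the whole session (GPU only at the moment).
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    pass  # build and run your graph here
```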

Now let's assume you've got everything set up correctly. The program in your example still won't be compiled by XLA, for two reasons:

  1. As @mrry has noted, XLA doesn't handle strings.
  2. Even if you replaced the string with a number, you still wouldn't see any IR dump, because the graph is just a single constant, and XLA will have constant-folded it away (see the sketch after this list for a graph that gives XLA something to compile).
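To illustrate both points, here is a minimal sketch (TF 1.x API assumed; the names and shapes are just illustrative) that feeds numeric data through a placeholder, so nothing involves strings and the result can't be constant-folded away:

```python
import numpy as np
import tensorflow as tf

# Session-wide JIT, as in the snippet above (GPU only at the moment).
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# A small numeric graph whose output depends on fed data, so XLA actually
# has something to compile instead of folding it into a constant.
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
w = tf.Variable(tf.zeros([784, 10]), name="w")
b = tf.Variable(tf.zeros([10]), name="b")
y = tf.nn.softmax(tf.matmul(x, w) + b)

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.random.rand(8, 784).astype(np.float32)})
    print(out.shape)
```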

In the comments you mentioned running mnist_softmax, presumably following the instructions on the link above. If you're indeed compiling and running on CPU, the only remaining issue is the use of VLOG(2). VLOG output is only emitted if you explicitly turn it on with command-line flags.

So try replacing your VLOG(2) with LOG(INFO), and you should see the IR dump in your logs.
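Alternatively, if you'd rather not edit the source at all, VLOG output can usually be raised from the environment. The sketch below assumes the TF_CPP_MIN_VLOG_LEVEL environment variable controls the C++ VLOG threshold in your build; treat that as an assumption and check your build's logging setup if it has no effect:

```python
import os

# Assumption: TF_CPP_MIN_VLOG_LEVEL sets the C++ VLOG threshold in this build.
# It must be set before TensorFlow's C++ runtime is loaded, i.e. before import.
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"

import tensorflow as tf  # imported after setting the variable on purpose

# ... build and run your XLA-compiled graph as usual; VLOG(2) messages,
# including the IR dump, should then show up in the logs ...
```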
