How can I get the LLVM IR dump from XLA in TensorFlow?

Submitted by 这一生的挚爱 on 2019-12-21 02:48:09

Question


I am trying to get the LLVM IR generated by the XLA compiler in TensorFlow. I know that the entire LLVM context is contained in the llvm_module object, which is then converted to a string with the utility function llvm_ir::DumpModuleToString(*llvm_module) inside Compile() in //tensorflow/compiler/xla/service/cpu/cpu_compiler.cc.

But when I try to log it with VLOG(2) from tensorflow/core/platform/logging.h, nothing is shown. However, log statements from other files do appear in my Python run:

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
2017-03-10 22:36:43.226843: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 8 visible devices
2017-03-10 22:36:43.227931: I tensorflow/compiler/xla/service/service.cc:183] XLA service 0x2821510 executing computations on platform Host. Devices:
2017-03-10 22:36:43.227951: I tensorflow/compiler/xla/service/service.cc:191]   StreamExecutor device (0): <undefined>, <undefined>
b'Hello, TensorFlow!'

Answer 1:


[FYI I can't leave comments, since I just joined and apparently don't have a reputation yet.]

First off, make sure to read this page, including the starred blue boxes: https://www.tensorflow.org/performance/xla/jit. In particular, note that turning on XLA for your whole session currently performs JIT compilation only for GPU, not for CPU.
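For reference, here is a minimal sketch of the two ways to turn JIT on in TF 1.x (my own illustration, not from the question; it assumes a build with XLA enabled). The session-wide option only takes effect on GPU at the moment, while the scope-based one from tf.contrib also covers CPU:

import tensorflow as tf
from tensorflow.contrib.compiler import jit

# Option 1: session-wide JIT. Currently this only compiles for GPU.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
sess = tf.Session(config=config)

# Option 2: mark specific ops for XLA compilation (works on CPU too).
with jit.experimental_jit_scope():
    pass  # build the ops you want compiled inside this scope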

Now let's assume you've got everything set up correctly. The program in your example won't be compiled by XLA for two reasons:

  1. As @mrry has noted, XLA doesn't handle strings.
  2. Even if you replaced the string with a number, you still wouldn't see any IR dump, because the graph is just a single constant, and XLA will have constant-folded it away (see the sketch after this list).
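To see XLA actually produce IR, replace the string with numeric ops and feed at least one input at run time so the graph can't be folded away. A minimal sketch (my own, assuming TF 1.x with tf.contrib available; it uses the scope-based JIT since this run is on CPU):

import numpy as np
import tensorflow as tf
from tensorflow.contrib.compiler import jit

x = tf.placeholder(tf.float32, shape=[2, 2])  # fed at run time, so not constant-folded
with jit.experimental_jit_scope():
    y = tf.matmul(x, x) + 1.0                 # numeric ops XLA can compile

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones([2, 2], np.float32)}))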

In the comments you mentioned running mnist_softmax, presumably following the instructions at the link above. If you're indeed compiling and running on CPU, the only remaining issue is the use of VLOG(2): VLOG output is only emitted if you explicitly turn it on with logging flags.
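If you do want to keep VLOG(2), here is a minimal sketch of enabling it from Python. This is an assumption on my part: it relies on the TF_CPP_MIN_VLOG_LEVEL and TF_CPP_VMODULE environment variables, which must be set before the TensorFlow runtime is loaded, and TF_CPP_VMODULE may not exist in older builds:

import os

# Must be set before TensorFlow's C++ runtime is loaded.
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"       # enable VLOG(2) everywhere (very noisy)
# A more targeted alternative: VLOG(2) only for cpu_compiler.cc.
os.environ["TF_CPP_VMODULE"] = "cpu_compiler=2"

import tensorflow as tf  # import deliberately after the environment is set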

So try replacing your VLOG(2) with LOG(INFO), and you should see the IR dump in your logs.



Source: https://stackoverflow.com/questions/42724165/how-can-i-get-the-llvm-ir-dump-from-xla-in-tensorflow
