bazel

Can I instruct bazel to emit a “.elf” suffix to executables?

回眸只為那壹抹淺笑 submitted on 2019-12-11 10:35:33
Question: cc_binary (on macOS / Linux) creates executables with no suffix. This makes sense, since the convention on those platforms is not to use extensions. When using Bazel as a cross-compiler through a custom CROSSTOOL, though, I'd like Bazel to emit an ELF file with an explicit .elf suffix. Is this possible, either through CROSSTOOL or a custom "rename" rule? Answer 1: You can name your cc_binary 'foo.elf' and Bazel will just build it. Or you can use a genrule to do the renaming afterwards. Now if you need
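A minimal sketch of the two options from the answer (target and file names here are hypothetical, not from the question):

```python
# BUILD — option 1: give the binary the .elf suffix directly
cc_binary(
    name = "app.elf",
    srcs = ["main.c"],
)

# Option 2: build a plain binary, then rename it with a genrule
cc_binary(
    name = "app",
    srcs = ["main.c"],
)

genrule(
    name = "app_elf",
    srcs = [":app"],
    outs = ["app_renamed.elf"],
    cmd = "cp $(location :app) $@",
)
```

Option 1 is simpler; option 2 keeps the original binary available under its plain name while also producing the suffixed copy.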

Fabric and Crashlytics not working with Bazel build

三世轮回 submitted on 2019-12-11 09:08:57
Question: I'm building an Android app which uses Fabric Crashlytics for logging all crashes on devices. This particular piece of code: Crashlytics crashlyticsKit = new Crashlytics.Builder() .core(new CrashlyticsCore.Builder().disabled(false).build()) .build(); Fabric.with(this, crashlyticsKit); is crashing with Bazel, but when I set this value to "true" (i.e., when I disable Crashlytics), the application's Bazel build works fine. But if I'm building a normal Android app it's not crashing

Tensorflow and Bazel c++

六眼飞鱼酱① submitted on 2019-12-11 07:32:15
Question: I'm trying to build TensorFlow C++ from source, but with no success. I followed different tutorials, but each time there is a different error. What I want to do is create a library so I can use it with Qt. I followed this tutorial because it was exactly what I wanted: https://tuatini.me/building-tensorflow-as-a-standalone-project/ (building on Ubuntu, not on a Raspberry Pi). It works fine until I have to use Bazel. The tutorial says I have to run this command: bazel build -c opt --verbose_failures /
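The tutorial's command is cut off in the excerpt above. For reference, a typical invocation for building TensorFlow's standalone C++ shared library looks like the following; the exact target name varies by TensorFlow version, so treat this as an assumption rather than the tutorial's literal command:

```
bazel build -c opt --verbose_failures //tensorflow:libtensorflow_cc.so
```

The resulting .so under bazel-bin, together with the TensorFlow headers, is what a Qt project would then link against.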

Error thrown in Google's Inception-v3 fine-tuning script

ε祈祈猫儿з submitted on 2019-12-11 06:19:48
Question: When following the README to fine-tune Google's Inception-v3 image classification model, I get the error: File "/Path/to/Model/bazel-bin/inception/flowers_train.runfiles/inception/inception/slim/ops.py", line 88, in batch_norm initializer=tf.zeros_initializer(), TypeError: zeros_initializer() takes at least 1 argument (0 given) This occurs after running the final command: bazel-bin/inception/flowers_train \ --train_dir="${TRAIN_DIR}" \ --data_dir="${FLOWERS_DATA_DIR}" \ --pretrained_model
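This looks like a TensorFlow version mismatch: the script calls tf.zeros_initializer() with no arguments (the newer style, where it is a zero-argument callable), while the installed TensorFlow still exposes the older function that requires a shape. A pure-Python sketch of the mismatch, with zeros_initializer as a stand-in rather than TensorFlow's actual implementation:

```python
# Stand-in illustrating the error (no TensorFlow involved): in older
# TF releases, zeros_initializer was a plain function requiring a
# shape argument, so a zero-argument call fails exactly as in the
# traceback above.
def zeros_initializer(shape):
    return [0.0] * shape

print(zeros_initializer(3))   # works when a shape is supplied

try:
    zeros_initializer()       # the zero-argument call the script makes
except TypeError as err:
    print("TypeError:", err)
```

Under this reading, upgrading TensorFlow to the version the script targets, or dropping the parentheses so the older function itself is passed as the initializer, are the usual fixes.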

How can I get bazel 0.26 (or older) for CentOS?

微笑、不失礼 submitted on 2019-12-11 05:56:24
Question: I need to compile TensorFlow from source using Bazel. As doing so on Ubuntu and then using it on CentOS does not seem to work properly, I want to build TensorFlow from source directly on CentOS. The official Bazel homepage says: "The Bazel team does not provide official packages for Fedora and CentOS. Vincent Batts (@vbatts) generously maintains unofficial packages on Fedora COPR." However, there I only find the .repo files for 0.27, which does not support the build process for TensorFlow. Is
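When no distribution package exists, the standalone installer from Bazel's GitHub releases is a common fallback. A sketch, assuming the filename follows Bazel's usual release naming (verify the exact name against the releases page):

```
# Download and run the standalone installer for a specific Bazel version
wget https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-installer-linux-x86_64.sh
chmod +x bazel-0.26.1-installer-linux-x86_64.sh
./bazel-0.26.1-installer-linux-x86_64.sh --user
```

The --user flag installs into ~/bin and avoids needing root on the CentOS machine.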

How to debug c++ code using Bazel on Windows 10 OS

余生颓废 submitted on 2019-12-11 05:46:35
Question: Recently I started to play with Bazel, and I face a problem with debugging the app: I can debug with g++, but I can't debug the Bazel-generated .exe file. Thank you for looking into this. Also, I build the source code with Bazel and VS Code tasks. Which .exe file should I target to debug my application? Is my setup OK? Is it possible to debug C++ source code using Bazel on Windows? Should I use MinGW for debugging a Bazel-generated executable, or some other tools or debuggers? Versions OS:
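Since the question mentions VS Code tasks, a hedged launch.json sketch for attaching the Windows C++ debugger to the Bazel output; the program path is an assumption based on Bazel's default bazel-bin layout, not taken from the question:

```json
{
    "name": "Debug Bazel binary",
    "type": "cppvsdbg",
    "request": "launch",
    "program": "${workspaceFolder}/bazel-bin/src/app/main.exe",
    "cwd": "${workspaceFolder}"
}
```

The binary to target is the one under bazel-bin, and it needs to have been built with --compilation_mode=dbg for symbols to be present.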

How to build and debug a c++ executable using Bazel on windows 10 x64

有些话、适合烂在心里 submitted on 2019-12-11 05:35:45
Question: I want to debug C++ code using the executable generated with Bazel, but for some reason Bazel doesn't build the code for the x64 architecture, or the executable does not work in debug mode. My files are main.cpp #include <iostream> int main() { int a = 3; int b = 5; int c = a + b; /* code */ std::cout << "Hello world" << std::endl; return 0; } I use this command to build my app: bazel build //src/app:main --strip=never --compilation_mode=dbg but when I try to debug the app after I set
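A minimal BUILD file matching the //src/app:main label in the command above might look like this (a sketch; the actual file is not shown in the excerpt):

```python
# src/app/BUILD
cc_binary(
    name = "main",
    srcs = ["main.cpp"],
)
```

With --compilation_mode=dbg, Bazel disables optimizations and keeps debug information; on Windows with the MSVC toolchain the debug symbols land in a .pdb next to the .exe under bazel-bin\src\app.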

User name in .bazelrc

旧时模样 submitted on 2019-12-11 04:29:18
Question: I would like to add this to my .bazelrc, but the $(whoami) doesn't expand as it would in a shell. startup --output_user_root=/tmp/bazel/out/$(whoami) It produces the literal result: /tmp/bazel/out/$(whoami)/faedb999bdce730c9c495251de1ca1a4/execroot/__main__/bazel-out/ Is there any way to do what I want: adding a name/hash to the option in the .bazelrc file? Edit: what I really want is to set the outputRoot to /tmp/bazel/out without using an environment variable and to let Bazel create its
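.bazelrc files are read verbatim, so any shell expansion has to happen before Bazel starts. One workaround is a small wrapper script that expands the user name at invocation time; this is a sketch, and the wrapper name is hypothetical:

```
#!/bin/sh
# bazel-wrapper: expand the user name in the shell, then forward all
# remaining arguments to the real bazel binary.
exec bazel --output_user_root="/tmp/bazel/out/$(whoami)" "$@"
```

Startup options like --output_user_root must appear before the command (build, test, ...), which is why the wrapper places the flag first.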

compile.sh - The system is out of resources

烂漫一生 submitted on 2019-12-11 02:34:54
Question: I am trying to compile Bazel from source on my NVIDIA Jetson TK1. When trying to run compile.sh I get the following error: ubuntu@tegra-ubuntu:~/bazelArtefact/bazel-0.14.1-dist$ ./compile.sh 🍃 Building Bazel from scratch.. /usr/lib/jvm/java-8-oracle/bin/javac -classpath third_party/asm/asm-analysis-6.0.jar:third_party/asm/asm-6.0-sources.jar:third_party/asm/asm-tree-6.0-sources.jar:third_party/asm/asm-commons-6.0.jar:third_party/asm/asm-6.0.jar:third_party/asm/asm-commons-6.0-sources.jar:third
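The Jetson TK1 has very little RAM, and "out of resources" during the bootstrap javac step usually means the compiler's heap is too large for the board. compile.sh lets the bootstrap compiler's JVM options be overridden through an environment variable; a sketch of the commonly suggested workaround (heap sizes are illustrative):

```
# Cap the bootstrap javac heap so it fits in the TK1's memory
BAZEL_JAVAC_OPTS="-J-Xms384m -J-Xmx512m" ./compile.sh
```

Adding swap space on the device is another frequently paired mitigation for low-memory ARM boards.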

Tensorflow Custom Compile on Windows

≯℡__Kan透↙ submitted on 2019-12-11 02:11:57
Question: So, I've installed Bazel via Chocolatey, installed Python 3.5 and 2.7, installed CUDA v8 and cuDNN v6, and installed JDK 8.0. I'm now trying to custom-build TensorFlow on my Windows 10 device, with AVX, AVX2, and CUDA. TensorFlow-GPU, the pre-built version, does work; I've already tested and run that successfully. I've followed the instructions of other articles, both on TensorFlow's actual site (trying to adapt some sections from the Linux/Mac installs) and on here. The furthest I've made
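For context, a typical build invocation for a CUDA + AVX pip package after running ./configure looks like the following; the flags are the commonly used ones, not taken from the truncated excerpt, and /arch:AVX2 is the MSVC spelling of the AVX2 option on Windows:

```
bazel build -c opt --copt=/arch:AVX2 --config=cuda //tensorflow/tools/pip_package:build_pip_package
```

The --config=cuda switch picks up the CUDA and cuDNN paths recorded by the configure step.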