tensorflow

Illegal Instruction: 4 error when running any Tensorflow program

ⅰ亾dé卋堺 submitted on 2021-02-05 07:35:23
Question: I am trying to train a TensorFlow convolutional neural network, and I always get a cryptic error regardless of the environment in which I run the program. In Jupyter Notebook, the kernel simply dies. In Terminal, I get "Illegal Instruction: 4" with no traceback. In PyCharm, I get: "Process finished with exit code 132 (interrupted by signal 4: SIGILL)". I have looked all over the Internet and I have not found any instance in which this particular error was thrown in this situation. I…
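Crashes like this are usually not bugs in the user's code: prebuilt TensorFlow wheels from 1.6 onward are compiled with AVX instructions, and a CPU without AVX aborts with SIGILL the moment one executes. A small, hedged sketch for checking whether the CPU advertises AVX (the `/proc/cpuinfo` flag name and the macOS `sysctl` key are the usual ones, but verify on your machine):

```python
# Hedged sketch: probe whether this CPU supports AVX. Prebuilt TensorFlow
# wheels from 1.6 onward use AVX, which raises SIGILL on CPUs lacking it.
import platform
import subprocess

def cpu_has_avx():
    system = platform.system()
    if system == "Linux":
        # The flags line of /proc/cpuinfo lists "avx" as a token on capable CPUs.
        with open("/proc/cpuinfo") as f:
            return "avx" in f.read().split()
    if system == "Darwin":
        # macOS exposes CPU features through sysctl.
        out = subprocess.run(
            ["sysctl", "-n", "machdep.cpu.features"],
            capture_output=True, text=True,
        ).stdout
        return "AVX" in out.upper()
    return False  # other platforms: assume no AVX rather than guess

print(cpu_has_avx())
```

If this prints False, any AVX-built TensorFlow wheel will die with SIGILL on import or first use.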

error: Illegal instruction (core dumped) - tensorflow==2.1.0

爷,独闯天下 submitted on 2021-02-05 06:50:34
Question: I am importing tensorflow in Python on my Ubuntu machine (Lenovo 110 IdeaPad laptop) using the following commands: (tfx-test) chandni@mxnet:~/Chandni/TFX$ python Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf Illegal instruction (core dumped) And the program exits. Kindly let me know the reason. Answer 1: You may need to downgrade to the CPU build of TensorFlow 1.5. Try running pip uninstall tensorflow and then pip…
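The answer's downgrade, written out as commands (the second pip command is truncated in the snippet above; 1.5 is the version the answer names, chosen because it predates the AVX-built wheels):

```shell
# Replace the installed wheel with TensorFlow 1.5, per the answer above.
pip uninstall -y tensorflow
pip install tensorflow==1.5
```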

Tensorflow object detection mask rcnn uses too much memory

孤街醉人 submitted on 2021-02-05 06:28:05
Question: I am trying to run TensorFlow object detection with Mask R-CNN, but it keeps dying on a node with 500 GB of memory. I updated the ConfigProto in models/research/object_detection/trainer.py to session_config = tf.ConfigProto(allow_soft_placement=True, intra_op_parallelism_threads=1, inter_op_parallelism_threads=1, device_count = {'CPU': 1}, log_device_placement=False) I updated mask_rcnn_inception_resnet_v2_atrous_coco.config to train_config: { batch_queue_capacity: 500 num_batch_queue_threads: 8…
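The session settings quoted above, reflowed so they are readable (TF 1.x API, as used by the object detection trainer; this is a configuration fragment, not a complete training script):

```python
import tensorflow as tf  # TF 1.x, where tf.ConfigProto exists at top level

# Limit TF to a single CPU device and one thread per op pool, as described
# in the question, to cap memory use and contention during Mask R-CNN training.
session_config = tf.ConfigProto(
    allow_soft_placement=True,       # fall back when a device is unavailable
    intra_op_parallelism_threads=1,  # threads within a single op
    inter_op_parallelism_threads=1,  # threads across independent ops
    device_count={'CPU': 1},
    log_device_placement=False,
)
```

Lowering batch_queue_capacity in the pipeline config has the same goal: fewer queued batches means fewer full-resolution Mask R-CNN examples held in memory at once.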

Keras: Predict model within custom loss function

谁说胖子不能爱 submitted on 2021-02-05 05:57:26
Question: I am trying to use some_model.predict(x) within a custom loss function. I found this custom loss function: _EPSILON = K.epsilon() def _loss_tensor(y_true, y_pred): y_pred = K.clip(y_pred, _EPSILON, 1.0-_EPSILON) out = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred)) return K.mean(out, axis=-1) But the problem is that model.predict() expects a numpy array. So I looked for how to convert a tensor (y_pred) to a numpy array. I found tmp = K.tf.round(y_true) but this returns…
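The quoted loss is clipped binary cross-entropy; here is a plain-numpy transcription of the same formula, useful for checking the math without a Keras session (1e-7 is Keras' default epsilon):

```python
import numpy as np

_EPSILON = 1e-7  # Keras' default K.epsilon()

def loss_numpy(y_true, y_pred):
    # Clip predictions away from 0 and 1 so the logs stay finite,
    # then take the mean binary cross-entropy over the last axis.
    y_pred = np.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return np.mean(out, axis=-1)
```

For y_true = [1, 0] and y_pred = [0.9, 0.1] this gives -log(0.9) ≈ 0.105: both elements are confidently correct, so the loss is small.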

tensorflow multi GPU parallel usage

↘锁芯ラ submitted on 2021-02-05 04:55:53
Question: I want to use 8 GPUs in parallel, not sequentially. For example, when I execute this code: import tensorflow as tf with tf.device('/gpu:0'): for i in range(10): print(i) with tf.device('/gpu:1'): for i in range(10, 20): print(i) I tried the shell command CUDA_VISIBLE_DEVICES='0,1' but the result is the same. I want to see output like "0 10 1 11 2 3 12 .... etc", but the actual output is sequential: "0 1 2 3 4 5 ..... 10 11 12 13..". How can I get the result I want? Answer 1: ** I see an edit to the question so adding this…
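One point worth noting about the question's code: the print(i) calls are ordinary Python statements, so they run sequentially no matter which tf.device scope surrounds them; device scopes only control where TensorFlow ops execute. To overlap per-GPU workloads you also need concurrency on the Python side. A minimal sketch with threads (the worker body here is a stand-in; real code would build and run ops under tf.device('/gpu:0') and tf.device('/gpu:1')):

```python
import threading

results = []
lock = threading.Lock()

def worker(start, stop):
    # Stand-in for per-GPU work: real code would wrap its ops in
    # tf.device('/gpu:%d' % gpu_id). Here each thread just records its range.
    for i in range(start, stop):
        with lock:
            results.append(i)

t0 = threading.Thread(target=worker, args=(0, 10))
t1 = threading.Thread(target=worker, args=(10, 20))
t0.start(); t1.start()
t0.join(); t1.join()

# results now holds 0..19 in whatever interleaved order the threads produced.
```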

A low-level API demonstration

一笑奈何 submitted on 2021-02-04 21:03:31
TensorFlow's architecture has five distinct levels: the hardware layer, the kernel layer, the low-level API, the mid-level API, and the high-level API. In this chapter we take linear regression as a running example to compare, side by side, what implementing a model looks like at the low-level, mid-level, and high-level APIs. From bottom to top, the five layers are as follows. The bottom layer is the hardware layer: TensorFlow supports adding CPUs, GPUs, or TPUs to its pool of compute resources. The second layer is the kernel, implemented in C++; kernels can run distributed across platforms. The third layer consists of operators implemented in Python, the low-level API that wraps the C++ kernels: tensor operations, computation graphs, and automatic differentiation, e.g. tf.Variable, tf.constant, tf.function, tf.GradientTape, tf.nn.softmax... If a model is a house, the third-level API is its bricks. The fourth layer consists of model components implemented in Python, function wrappers over the low-level API: layers, loss functions, optimizers, data pipelines, feature columns, and so on, e.g. tf.keras.layers, tf.keras.losses, tf.keras.metrics, tf.keras.optimizers, tf.data.Dataset, tf.feature_column... If a model is a house, the fourth-level API is its walls. The fifth layer consists of finished models implemented in Python…
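The chapter's running example is linear regression. As a reference point for what the low-level demo computes, here is the same fit in plain numpy, with the gradients written by hand (the low-level TensorFlow version would swap in tf.Variable and tf.GradientTape for the manual derivatives; the data, learning rate, and step count are illustrative):

```python
import numpy as np

# Synthetic data from y = 3x + 2 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 2.0 + rng.normal(0.0, 0.1, size=200)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X + b
    err = pred - y
    w -= lr * 2 * np.mean(err * X)  # d(mse)/dw
    b -= lr * 2 * np.mean(err)      # d(mse)/db
```

After 500 steps w and b land close to the true 3.0 and 2.0; the mid- and high-level API versions in the chapter fit the same model with tf.keras components instead of hand-written updates.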

Set half of the filters of a layer as not trainable keras/tensorflow

放肆的年华 submitted on 2021-02-04 14:56:22
Question: I'm trying to train a model suggested by this research paper, in which half of the filters of a convolution layer are set to Gabor filters and the rest are random weights initialized by default. Normally, if I have to make a layer non-trainable, I set its trainable attribute to False. But here I have to freeze only half of the filters of a layer, and I have no idea how to do so. Any help would be really appreciated. I'm using Keras with the TensorFlow backend. Answer 1: How about making two…
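The answer's "making two..." approach can be sketched as follows: build two parallel Conv2D layers, freeze the one that will hold the fixed Gabor bank, and concatenate their outputs so downstream layers see one combined feature map. The input shape, filter counts, and layer names here are illustrative, not from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 1))

# Frozen half: would be loaded with the Gabor kernels, e.g. via
# fixed.set_weights([...]) once the layer has been built.
fixed = layers.Conv2D(16, 3, padding="same", name="gabor_conv")
fixed.trainable = False

# Trainable half keeps Keras' default random initialization.
learned = layers.Conv2D(16, 3, padding="same", name="free_conv")

# Concatenate along channels: downstream layers see 32 filters total.
x = layers.Concatenate()([fixed(inputs), learned(inputs)])
model = tf.keras.Model(inputs, x)

# During training, only free_conv's kernel and bias receive updates.
```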
