tensorflow

tf.enable_eager_execution must be called at program startup ONLY in SPYDER IDE

Submitted by 馋奶兔 on 2021-02-10 03:17:59
Question: I have tried to perform eager execution of a simple piece of code. I've tried it in both Jupyter Notebook and the Spyder IDE. With Jupyter I have no problem, but when I execute the code in Spyder it returns an error:

File "C:\...\lib\site-packages\tensorflow\python\framework\ops.py", line 5496, in enable_eager_execution
    "tf.enable_eager_execution must be called at program startup.")
ValueError: tf.enable_eager_execution must be called at program startup.

and the code is as follows: import tensorflow as
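A likely explanation: Jupyter gives you a fresh kernel on each restart, while Spyder's default IPython console keeps the same Python process alive between runs, so on the second run TensorFlow has already created ops and rejects the call. Restarting the console (or enabling "Execute in a dedicated console" in Spyder's run settings) fixes it. The "startup only" constraint itself can be sketched in pure Python (illustrative names, not TensorFlow code):

```python
class StartupOnlyContext:
    """Illustrative mimic (not TensorFlow itself) of the TF 1.x rule that
    eager mode can only be chosen before any op has been created."""
    def __init__(self):
        self.ops_created = 0
        self.eager = False

    def create_op(self):
        self.ops_created += 1

    def enable_eager_execution(self):
        # Once any op exists in this process, the mode is locked in.
        if self.ops_created:
            raise ValueError(
                "tf.enable_eager_execution must be called at program startup.")
        self.eager = True

ctx = StartupOnlyContext()
ctx.enable_eager_execution()      # first run of the script: fine
ctx.create_op()                   # the script builds some ops

raised = False
try:
    ctx.enable_eager_execution()  # re-running in the same console: rejected
except ValueError:
    raised = True
```

This is why the same script works in a fresh Jupyter kernel but fails on the second run in a persistent Spyder console.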

Principles of model quantization, with a tflite example

Submitted by 被刻印的时光 ゝ on 2021-02-09 12:02:53
Model quantization

What is quantization? A model's weights are generally float32; quantization converts them to int8. There are in fact many kinds of quantization; int8/fp16 quantization is the mainstream, and other variants include:

Binary neural networks: networks with binary weights and activations at run time, whose parameter gradients are computed at training time.
Ternary weight networks: networks whose weights are constrained to +1, 0 and -1.
XNOR networks: the filters and the inputs to convolutional layers are binary. XNOR networks mainly use binary operations to approximate convolution.

Many frameworks and tools now offer quantization, for example NVIDIA's TensorRT, Xilinx's DNNDK, TensorFlow, PyTorch, MxNet, and so on.

Pros and cons of quantization. The advantages are clear: int8 occupies less memory and computes faster, and a quantized model runs better on low-power embedded devices, e.g. for mobile or autonomous-driving applications. The drawback is just as clear: a quantized model loses precision, so model accuracy drops.

How quantization works. First look at how a computer stores floating-point and fixed-point numbers: the negative exponents determine the smallest non-zero absolute value a float can express, while the positive exponents determine the largest absolute value it can express, i.e. the float's range. A float ranges over roughly -2^128 to +2^128, so the value range of float is extremely wide. Back to the essence of quantization: find a mapping so that float32 and int8 values correspond to each other. The problem is that float32 can express an extremely wide range of values, while int8 can only express [0,255]
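The mapping described above can be sketched in plain Python: choose a scale and zero-point so that the observed float range maps onto the 8-bit range (here [0, 255], matching the asymmetric scheme the text mentions). A minimal sketch; all names are illustrative:

```python
def quantize(values, qmin=0, qmax=255):
    """Affine (asymmetric) quantization: map [min(values), max(values)]
    linearly onto the integer range [qmin, qmax]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against hi == lo
    zero_point = qmin - round(lo / scale)      # integer that represents 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# each restored value approximates its original within one quantization step
```

The precision loss mentioned above is visible here: every round-trip value is correct only up to `scale`, one step of the 256-level grid.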

How to use the Keras Reshape layer with two None dimensions?

Submitted by 喜你入骨 on 2021-02-09 09:35:41
Question: I have a Keras 3D/2D model. In this model a 3D layer has the shape [None, None, 4, 32]. I want to reshape it into [None, None, 128]. However, if I simply do the following: reshaped_layer = Reshape((-1, 128))(my_layer) then my_layer has a shape of [None, 128], and therefore I cannot apply any 2D convolution afterwards, such as: conv_x = Conv2D(16, (1,1))(reshaped_layer) I've tried to use tf.shape(my_layer) and tf.reshape, but I have not been able to compile the model since tf.reshape is not a Keras
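The shape arithmetic itself is simple: the two trailing static dimensions 4 and 32 merge into 128 while both None dimensions survive. A pure-Python sketch of that target-shape computation (in Keras itself this kind of reshape is typically done inside a Lambda layer using the tensor's dynamic shape; the helper name here is illustrative):

```python
def merge_last_dims(shape, n=2):
    """Collapse the last n dimensions of a (possibly partially unknown)
    shape tuple; None dimensions elsewhere are preserved untouched."""
    head, tail = shape[:-n], shape[-n:]
    if any(d is None for d in tail):
        raise ValueError("merged dimensions must be statically known")
    merged = 1
    for d in tail:
        merged *= d
    return tuple(head) + (merged,)

print(merge_last_dims((None, None, 4, 32)))  # → (None, None, 128)
```

Reshape((-1, 128)) fails here because a Keras Reshape target can contain at most one unknown dimension besides the implicit batch axis, so the second None gets folded away.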

How to disable Tensorflow's multi-threading?

Submitted by 僤鯓⒐⒋嵵緔 on 2021-02-09 08:18:45
Question: I'm running TensorFlow programs on a simulator that does not support multi-threading. I changed intra_op_parallelism_threads to 1 in tensorflow/core/common_runtime/local_device.cc at line 38, but I still get runtime errors as soon as the threading starts. My guess is that the multi-threading setup is still there. Is it possible to disable multi-threading? Answer 1: You cannot, as of early 2017, disable TensorFlow's multithreading. Source: https://stackoverflow.com/questions/44767233/how-to-disable
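For completeness: rather than patching local_device.cc, the thread pools can be configured from the Python API. A configuration sketch, assuming the TF 1.x session API this 2017 question refers to (TF 2.x exposes the same knobs via tf.config.threading):

```python
import tensorflow as tf

# Reduce both op-level thread pools to a single thread. Note that even
# with this configuration TensorFlow may still create some internal
# threads, which is why the answer above says multithreading cannot be
# fully disabled.
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)
```

This limits the parallelism of the compute thread pools but does not guarantee a strictly single-threaded process.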

Why does Keras use “call” instead of __call__?

Submitted by 我的未来我决定 on 2021-02-09 07:09:35
Question: I found the following code in https://www.tensorflow.org/tutorials/eager/custom_layers:

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_variable("kernel",
                                        shape=[int(input_shape[-1]), self.num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel)

The last two lines define the call method, but it does not look like a usual Python class method
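The short answer is that tf.keras.layers.Layer defines __call__ for you: it runs build() the first time the layer sees data (so weight shapes can depend on the input shape) and then delegates to your call(). A pure-Python sketch of that wrapper pattern (simplified, not the real Keras implementation):

```python
class Layer:
    """Minimal mimic of the Keras base class: __call__ handles one-time
    build, then forwards to the subclass's call()."""
    def __init__(self):
        self.built = False

    def build(self, input_shape):
        pass  # subclasses create their weights here

    def call(self, inputs):
        raise NotImplementedError

    def __call__(self, inputs):
        if not self.built:
            self.build(len(inputs))  # the input "shape" is just its length here
            self.built = True
        return self.call(inputs)

class Double(Layer):
    def build(self, input_shape):
        self.factor = 2  # pretend this is a weight sized from input_shape

    def call(self, inputs):
        return [self.factor * x for x in inputs]

layer = Double()
print(layer([1, 2, 3]))  # → [2, 4, 6]
```

Overriding call() instead of __call__ lets the base class keep this bookkeeping (deferred building, and in real Keras also name scoping and input checks) around every subclass's forward pass.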

{ “error”: “inputs is a plain value/list, but expecting an object as multiple input tensors required as per tensorinfo_map” }

Submitted by 懵懂的女人 on 2021-02-09 07:00:36
Question: I am using TensorFlow Serving to deploy my model. My tensorinfo map is:

saved_model_cli show --dir /export/1/ --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
  inputs['length_0'] tensor_info:
      dtype: DT_INT32
      shape: (-1)
      name: serving_default_length_0:0
  inputs['length_1'] tensor_info:
      dtype: DT_INT32
      shape: (-1)
      name: serving_default_length_1:0
  inputs['length_2'] tensor_info:
      dtype: DT_INT32
      shape: (-1)
      name: serving_default
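This error comes from the REST API's request format: when the SignatureDef declares multiple named inputs, each entry in the row-oriented "instances" list must be a JSON object keyed by input name, not a bare value or list. A sketch of building such a payload in Python (input names taken from the SignatureDef above; the values are placeholders):

```python
import json

# Each instance is an object keyed by the SignatureDef input names --
# a plain list here triggers the "expecting an object as multiple
# input tensors required" error.
payload = {
    "instances": [
        {
            "length_0": 3,
            "length_1": 5,
            "length_2": 4,
        }
    ]
}
body = json.dumps(payload)  # POST this to /v1/models/<name>:predict
```

The columnar "inputs" form works the same way: an object mapping each input name to its batch of values.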

ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: [None, 2584]

Submitted by 女生的网名这么多〃 on 2021-02-09 05:58:44
Question: I'm working on a project that isolates the vocal parts of an audio track. I'm using the DSD100 dataset, but for testing I'm using the DSD100subset dataset from I only use the mixtures and the vocals. I'm basing this work on this article. First I process the audio files to extract a spectrogram from each and put it in a list, with all the audio forming four lists (trainMixed, trainVocals, testMixed, testVocals), like this:

def to_spec(wav, n_fft=1024, hop_length=256):
    return librosa.stft(wav, n_fft=n_fft, hop
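The error message itself points at the fix: the first layer is a 2-D convolution expecting 4-D input (batch, height, width, channels), but the data arrives flattened to (batch, 2584). The usual remedy is to keep each spectrogram 2-D and add a trailing channel axis instead of flattening; a sketch with NumPy (the shape values are illustrative, not taken from the question):

```python
import numpy as np

# A batch of 2-D spectrogram excerpts: (batch, freq_bins, frames).
batch = np.zeros((4, 513, 128), dtype=np.float32)

# Conv2D wants 4-D input (batch, height, width, channels), so add a
# single channel axis rather than flattening each example to 1-D.
batch_4d = batch[..., np.newaxis]
print(batch_4d.shape)  # → (4, 513, 128, 1)
```

With the input kept in this 4-D form, the model's input_shape should be the per-example (height, width, 1), and the "found ndim=2" error goes away.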
