deep-learning

TensorFlow's variable_scope values parameter meaning

岁酱吖の submitted on 2020-01-20 08:10:24
Question: I am currently reading the source code of the slim library, which is based on TensorFlow, and it uses the values argument of the variable_scope method a lot, like here. From the API page I can see: This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope. My question is: are variables from values only checked for being from the same graph? What are the use cases for this and why someone
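As far as I can tell, values serves as a graph-consistency guard: the tensors passed in determine which graph the context manager makes the default, and mixing tensors from different graphs fails fast. A minimal sketch of the slim-style usage, assuming the TF 1.x graph-mode API; the layer name and shapes are my own placeholders:

    import tensorflow as tf  # TF 1.x graph-mode API

    def my_conv(inputs, scope=None):
        # Passing values=[inputs] lets the context manager verify that
        # `inputs` belongs to the graph it is about to make the default,
        # instead of silently creating ops in the wrong graph.
        with tf.variable_scope(scope, default_name='my_conv', values=[inputs]):
            w = tf.get_variable('w', shape=[3, 3, 3, 16])
            return tf.nn.conv2d(inputs, w, strides=[1, 1, 1, 1], padding='SAME')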

Understanding tf.contrib.lite.TFLiteConverter quantization parameters

天大地大妈咪最大 submitted on 2020-01-19 14:17:12
Question: I'm trying to use UINT8 quantization while converting a TensorFlow model to a TFLite model. If I use post_training_quantize = True, the model size is 4x smaller than the original fp32 model, so I assume the model weights are uint8, but when I load the model and get the input type via interpreter_aligner.get_input_details()[0]['dtype'], it is float32. The outputs of the quantized model are about the same as the original model. converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph( graph_def_file='tflite-models/tf
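This behavior is expected: post_training_quantize only quantizes the stored weights, while the model's input/output interface stays float32. A full uint8 interface needs the quantized conversion path instead. A minimal sketch with the TF 1.x contrib API, where the tensor names, file path, and (mean, std) stats are assumptions for illustration:

    import tensorflow as tf  # TF 1.x

    converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='model.pb',      # hypothetical frozen graph
        input_arrays=['input'],
        output_arrays=['output'])
    # Ask for a uint8 interface, not just uint8 weights.
    converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
    # (mean, std_dev) used to map float inputs onto the uint8 range.
    converter.quantized_input_stats = {'input': (128.0, 128.0)}
    tflite_model = converter.convert()

Note that this path generally expects a graph trained or rewritten with fake-quantization nodes; without them the converter may refuse to proceed or require default ranges.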

How to calculate the number of parameters of an LSTM network?

佐手、 submitted on 2020-01-19 02:55:12
Question: Is there a way to calculate the total number of parameters in an LSTM network? I have found an example, but I'm unsure how correct it is, or whether I have understood it correctly. For example, consider the following:

    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation
    from keras.layers import Embedding
    from keras.layers import LSTM

    model = Sequential()
    model.add(LSTM(256, input_dim=4096, input_length=16))
    model.summary()

Output _____________________________
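For reference, a standard Keras LSTM layer has 4 * (n * (n + m) + n) parameters, where n is the number of units, m is the input dimension, and the factor 4 covers the input, forget, cell, and output gates (each with its own kernel, recurrent kernel, and bias). A quick check for the layer above:

    n, m = 256, 4096                 # units, input_dim
    params = 4 * (n * (n + m) + n)   # gates x (kernel + recurrent kernel + bias)
    print(params)                    # 4457472, matching model.summary()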

Why implement x = deepcopy(x) in Torch7? [duplicate]

守給你的承諾、 submitted on 2020-01-17 18:18:25
Question: This question already exists: what is meant by color_content_masks = deepcopy(color_content_masks) in the Torch7 code below? Closed 2 years ago.

    if is_pooling then
      for k = 1, #color_codes do
        color_content_masks[k] = image.scale(color_content_masks[k],
          math.ceil(color_content_masks[k]:size(2)/2),
          math.ceil(color_content_masks[k]:size(1)/2))
        color_style_masks[k] = image.scale(color_style_masks[k],
          math.ceil(color_style_masks[k]:size(2)/2),
          math.ceil(color_style_masks[k]:size(1)/2))
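The usual motive for x = deepcopy(x) in code like this is to break aliasing: if two names point at the same underlying tables or tensors, rescaling one mask list silently mutates the other. A minimal Python illustration of the same pitfall (the data is made up; this is not the Torch7 code):

    from copy import deepcopy

    masks = [[1, 2], [3, 4]]
    style = masks               # alias: both names share the same inner lists
    style[0][0] = 99
    print(masks[0][0])          # 99 -- the "other" variable changed too

    masks = [[1, 2], [3, 4]]
    style = deepcopy(masks)     # fully independent copy
    style[0][0] = 99
    print(masks[0][0])          # 1 -- original untouched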

I get an error while trying to customize my loss function

╄→尐↘猪︶ㄣ submitted on 2020-01-16 19:06:57
Question: I am trying to create a custom loss function for my deep learning model, and I run into an error. I will give an example here of code that is not what I want to use, but if I understand how to make this little loss function work, then I think I'll be able to make my long loss function work. So I am pretty much asking for help to make the following function work:

    model.compile(optimizer='rmsprop', loss=try_loss(pic_try), metrics=['accuracy'])

    def try_loss(pic):
        def try_2_loss(y
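The closure pattern the excerpt hints at usually looks like the sketch below; the body of try_2_loss is a placeholder I made up, since the original is cut off. Note that try_loss must be defined before the model.compile call that references it:

    from tensorflow.keras import backend as K

    def try_loss(pic):
        # The outer function captures the extra tensor `pic`;
        # Keras only ever calls the inner function with (y_true, y_pred).
        def try_2_loss(y_true, y_pred):
            # Hypothetical body: plain MSE plus a term that uses `pic`.
            return K.mean(K.square(y_pred - y_true)) + K.mean(pic)
        return try_2_loss

    # model.compile(optimizer='rmsprop', loss=try_loss(pic_try),
    #               metrics=['accuracy'])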

How to set up the Caffe imagenet_solver.prototxt file for fewer JPEGs; program exited after iteration 0

|▌冷眼眸甩不掉的悲伤 submitted on 2020-01-16 18:19:28
Question: We need help understanding the parameters to use for a smaller training set (6000 JPEGs) and validation set (170 JPEGs). Our run was killed and exited after test score 0/1 in iteration 0. We are trying to run the ImageNet sample from the Caffe website tutorial at http://caffe.berkeleyvision.org/gathered/examples/imagenet.html. Instead of using the full set of ILSVRC2 images in the package, we use our own training set of 6000 JPEGs and a validation set of 170 JPEG images. They are each 256 x 256 jpeg
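One way to sanity-check solver values for a small dataset is to derive them from the dataset and batch sizes, since the stock ImageNet solver assumes far more images than this. A minimal sketch; the batch sizes here are assumptions standing in for whatever the train/val prototxt actually uses:

    import math

    train_images, val_images = 6000, 170
    train_batch, test_batch = 32, 17    # assumed batch sizes from the prototxt

    iters_per_epoch = math.ceil(train_images / train_batch)  # ~188 iters = 1 epoch
    test_iter = math.ceil(val_images / test_batch)           # 10 batches covers all 170
    test_interval = iters_per_epoch                          # test once per epoch
    max_iter = 50 * iters_per_epoch                          # train for ~50 epochs

    print(iters_per_epoch, test_iter, test_interval, max_iter)

The key constraint is that test_iter times the test batch size should match the 170 validation images, so each test pass covers the whole validation set exactly once.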

Converting keras.applications.resnet50 to a Sequential model gives an error

青春壹個敷衍的年華 submitted on 2020-01-16 18:01:12
Question: I want to convert the pretrained ResNet50 model from keras.applications to a Sequential model, but it gives an input_shape error: Input 0 is incompatible with layer res2a_branch1: expected axis -1 of input shape to have value 64 but got shape (None, 25, 25, 256). I read https://github.com/keras-team/keras/issues/9721, and as I understand it, the reason for the error is the skip connections. Is there a way to convert it to a Sequential model, or how can I add my custom model to the end of this ResNet model? This is the
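ResNet's skip connections branch and re-merge, so the graph cannot be flattened into a Sequential stack; the usual workaround is to keep ResNet50 as one node and attach a custom head with the functional API. A minimal sketch, where the head layers and class count are my own placeholders:

    from keras.applications.resnet50 import ResNet50
    from keras.layers import GlobalAveragePooling2D, Dense
    from keras.models import Model

    # Keep ResNet50 intact (skip connections and all) and build on its output.
    base = ResNet50(weights='imagenet', include_top=False,
                    input_shape=(224, 224, 3))
    x = GlobalAveragePooling2D()(base.output)
    out = Dense(10, activation='softmax')(x)    # hypothetical 10-class head
    model = Model(inputs=base.input, outputs=out)
    model.summary()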

How to visualize output layers in 3D during the training phase

邮差的信 submitted on 2020-01-16 08:43:29
Question: My final goal is to find a way to visualize all layers in 3D during training, something like TensorSpace but in real time during the training phase. I am using TensorFlow and Google Colab. I managed to get each output as a tensor by creating a custom callback and adding this method:

    def on_train_batch_end(self, batch, logs=None):
        for i in range(len(model_inception.layers)):
            get_layer_output = K.function(inputs=self.model.layers[i].input,
                                          outputs=self.model.layers[i].output)
            print('\n
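One alternative to building a K.function per layer on every batch is to build a single multi-output extractor model once and call it per batch. A minimal TF 2.x sketch, where sample_batch is a hypothetical probe input and the rendering step is left as a stub:

    import tensorflow as tf

    class ActivationGrabber(tf.keras.callbacks.Callback):
        """Collect every layer's output once per batch (sketch)."""

        def __init__(self, sample_batch):
            super().__init__()
            self.sample_batch = sample_batch  # hypothetical fixed probe input

        def set_model(self, model):
            super().set_model(model)
            # Build the extractor once, not inside the training loop.
            self.extractor = tf.keras.Model(
                inputs=model.inputs,
                outputs=[layer.output for layer in model.layers])

        def on_train_batch_end(self, batch, logs=None):
            activations = self.extractor(self.sample_batch, training=False)
            # Hand these tensors to whatever 3D renderer you use here.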

Test net output #0: accuracy = 1, always (Caffe)

試著忘記壹切 submitted on 2020-01-16 08:34:12
Question: I'm always getting the same accuracy. When I run the classification, it always shows one label. I went through many articles, and everyone recommends shuffling the data. I did that using random.shuffle and also tried the convert_imageset script, but it didn't help. Please find my solver.prototxt and caffenet_train.prototxt below. I have 1000 images in my dataset: 833 images in train_lmdb and the rest in validation_lmdb. Training logs: I1112 22:41:26.373661 10633 solver.cpp:347] Iteration
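When every prediction collapses onto one label, a common cause is that the LMDB was written with all images of one class first, so each training batch sees a single class. The fix is to shuffle the (path, label) pairs together before writing, not the paths and labels separately. A minimal sketch with made-up file names:

    import random

    # Hypothetical (path, label) pairs; in practice, read them from your listing file.
    samples = [('img_%04d.jpg' % i, i % 2) for i in range(1000)]

    random.seed(0)           # reproducible shuffle
    random.shuffle(samples)  # shuffles paths and labels as pairs, keeping them aligned

    train, val = samples[:833], samples[833:]
    print(train[:3])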