deep-learning

Resnet50 does not converge. VGG16 works fine

Submitted by 一曲冷凌霜 on 2020-01-16 08:23:15
Question: I trained a regression network using ResNet50 as the backbone. The input of the network is an image of size 224*224*3 and the output is a single value ranging from 0 to 1, but the network cannot converge, no matter whether I use sigmoid or ReLU as the output layer's activation, or MAE or MSE as the loss function. For example, with ResNet50 as the backbone, MAE as the loss function, sigmoid as the output layer's activation, and SGD as the optimizer, the training loss would be: Epoch 1 training loss is
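A minimal Keras sketch of the setup described (ResNet50 backbone, 224x224x3 input, single sigmoid output in [0, 1], MAE loss, SGD); the global-average-pooling head and the learning-rate/momentum values are illustrative assumptions, not taken from the question:

```python
# Sketch only: ResNet50 backbone with a single sigmoid regression output.
# Pooling head and SGD hyperparameters are assumptions for illustration.
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import SGD

backbone = ResNet50(include_top=False, weights='imagenet',
                    input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(backbone.output)
output = Dense(1, activation='sigmoid')(x)   # single value in [0, 1]

model = Model(inputs=backbone.input, outputs=output)
model.compile(optimizer=SGD(lr=0.001, momentum=0.9), loss='mae')
```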

Creating large LMDBs for Caffe with numpy arrays

Submitted by 邮差的信 on 2020-01-16 05:21:04
Question: I have two 60 x 80921 matrices, one filled with data and one with reference values. I would like to store the values as key/value pairs in two different LMDBs, one for training (say I'll slice around the 60000 column mark) and one for testing. Here is my idea; does it work? X_train = X[:,:60000] Y_train = Y[:,:60000] X_test = X[:,60000:] Y_test = Y[:,60000:] X_train = X_train.astype(int) X_test = X_test.astype(int) Y_train = Y_train.astype(int) Y_test = Y_test.astype(int) map_size = X_train.nbytes * 10
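A hedged sketch of how the sliced arrays could be written out with the py-lmdb package and Caffe's Datum protobuf (caffe.io.array_to_datum); X and Y are the two 60 x 80921 matrices from the question, while the database names, key format, and per-column 60x1x1 reshape are assumptions for illustration:

```python
import lmdb
import caffe

# Slice around the 60000 column mark, as in the question.
X_train = X[:, :60000].astype(int)
Y_train = Y[:, :60000].astype(int)

env_x = lmdb.open('X_train_lmdb', map_size=X_train.nbytes * 10)
env_y = lmdb.open('Y_train_lmdb', map_size=Y_train.nbytes * 10)

with env_x.begin(write=True) as txn_x, env_y.begin(write=True) as txn_y:
    for i in range(X_train.shape[1]):
        key = '{:08d}'.format(i).encode('ascii')
        # Caffe Datums are 3-D (channels, height, width); each column
        # becomes a 60x1x1 blob here.
        x_datum = caffe.io.array_to_datum(X_train[:, i].reshape(60, 1, 1))
        y_datum = caffe.io.array_to_datum(Y_train[:, i].reshape(60, 1, 1))
        txn_x.put(key, x_datum.SerializeToString())
        txn_y.put(key, y_datum.SerializeToString())
```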

MobileNets for a custom image size

Submitted by 点点圈 on 2020-01-15 18:05:29
Question: I want to use the MobileNet model pre-trained on ImageNet for feature extraction. I am loading the model as follows: from keras.applications.mobilenet import MobileNet feature_model = MobileNet(include_top=False, weights='imagenet', input_shape=(200, 200, 3)) The Keras manual clearly says that this input shape is valid: input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3,
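For reference, the loading call from the question in runnable form; the printed output shape is added only for illustration:

```python
from keras.applications.mobilenet import MobileNet

feature_model = MobileNet(include_top=False,
                          weights='imagenet',
                          input_shape=(200, 200, 3))
print(feature_model.output_shape)  # feature-map shape for the 200x200 input
```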

Keras model.predict function giving input shape error

Submitted by 妖精的绣舞 on 2020-01-15 10:36:26
Question: I have implemented the Universal Sentence Encoder in TensorFlow and now I am trying to predict the class probabilities for a sentence. I am converting the string to an array as well. Code: if model.model_type == "universal_classifier_basic": class_probs = model.predict(np.array(['this is a random sentence'], dtype=object)) Error Message: InvalidArgumentError (see above for traceback): input must be a vector, got shape: [] [[Node: lambda_1/module_apply_default/tokenize/StringSplit = StringSplit[skip
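A hedged sketch of the input shape the TF-Hub Universal Sentence Encoder module expects: a 1-D batch of strings (shape (1,)), not a scalar. The module URL and the TF 1.x session boilerplate are assumptions, since the question's model wrapper is not shown:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
sentences = np.array(['this is a random sentence'], dtype=object)  # shape (1,)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    embeddings = sess.run(embed(sentences))  # one 512-dim vector per sentence
```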

ValueError: Error when checking : expected flatten_1_input to have shape (None, 4, 4, 512) but got array with shape (1, 150, 150, 3)

Submitted by 匆匆过客 on 2020-01-15 09:22:08
Question: I followed the guide at this link to build a model and stopped before the fine-tuning part, to test the model on some other images using the following code: img_width, img_height = 150, 150 batch_size = 1 test_model = load_model('dog_cat_model.h5') validation_data_dir = "test1" test_datagen = ImageDataGenerator(rescale=1. / 255) validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, shuffle=False, class_mode=
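A hedged reconstruction of that evaluation snippet in runnable form; the excerpt is cut off at class_mode, so the class_mode value and the prediction call are assumptions for illustration:

```python
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator

img_width, img_height = 150, 150
batch_size = 1

test_model = load_model('dog_cat_model.h5')

test_datagen = ImageDataGenerator(rescale=1. / 255)
validation_generator = test_datagen.flow_from_directory(
    'test1',                     # validation_data_dir from the question
    target_size=(img_width, img_height),
    batch_size=batch_size,
    shuffle=False,
    class_mode='binary')         # assumed; the original excerpt ends here

predictions = test_model.predict_generator(
    validation_generator, steps=validation_generator.samples // batch_size)
```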

Keras stuck during optimization

Submitted by 与世无争的帅哥 on 2020-01-15 09:19:31
Question: After trying the Keras example on CIFAR10, I decided to go for something bigger: a VGG-like net on the Tiny ImageNet dataset. This is a subset of the ImageNet dataset with 200 classes (instead of 1000) and 100K images downscaled to 64x64. I got the VGG-like model from the file vgg_like_convnet.py here. Unfortunately, things are going pretty much like here, except that this time changing the learning rate or swapping TH for TF does not help. Nor does changing the optimizer (see code below).
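The referenced code is not included in the excerpt; as a hedged, heavily shrunken stand-in, the sketch below only illustrates the compile step and the kind of optimizer swap described. The layer sizes and hyperparameters are illustrative assumptions, and the real vgg_like_convnet.py model is not reproduced:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import SGD

# Toy VGG-like stand-in for 64x64 Tiny ImageNet images, 200 classes.
model = Sequential([
    Conv2D(64, (3, 3), activation='relu', padding='same',
           input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(200, activation='softmax'),
])

# Swapping this optimizer (e.g. for Adam or RMSprop) is one of the changes
# the question describes trying.
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True),
              metrics=['accuracy'])
```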

Optimize input image with class prior

Submitted by 佐手、 on 2020-01-15 07:13:29
Question: I'm trying to implement the first part of the Google blog entry Inceptionism: Going Deeper into Neural Networks in TensorFlow. So far I have found several resources that either explain it in natural language, focus on other parts, or give code snippets for other frameworks. I understand the idea of optimizing a random input image with respect to a class prior, and also the maths behind it given in this paper, section 2, but I'm not able to implement it myself using TensorFlow. From this
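A hedged sketch of the core idea (gradient ascent on the input image to maximize a class score, as in section 2 of the paper). It uses Keras' pretrained VGG16 with the TensorFlow backend as a stand-in classifier rather than the raw-TensorFlow setup the question asks about, and the class index, step size, and iteration count are arbitrary:

```python
import numpy as np
from keras.applications.vgg16 import VGG16
from keras import backend as K

model = VGG16(weights='imagenet', include_top=True)

target_class = 130                       # arbitrary ImageNet class index
# The paper maximizes the unnormalized class score; the softmax output is
# used here only to keep the sketch short.
score = model.output[0, target_class]
grads = K.gradients(score, model.input)[0]
grads /= K.sqrt(K.mean(K.square(grads))) + 1e-8   # normalize the gradient
iterate = K.function([model.input], [score, grads])

img = np.random.normal(size=(1, 224, 224, 3)).astype(np.float32)
for _ in range(100):
    score_val, grads_val = iterate([img])
    img += 1.0 * grads_val               # gradient ascent step on the image
```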
