deep-learning

Unable to use trained TensorFlow model

百般思念 submitted on 2020-02-19 05:07:07
Question: I am new to deep learning and TensorFlow. I retrained a pretrained TensorFlow InceptionV3 model and saved it as saved_model.pb to recognize different types of images, but the code below fails when I try to use the file:

    with tf.Session() as sess:
        with tf.gfile.FastGFile("tensorflow/trained/saved_model.pb", 'rb') as f:
            graph_def = tf.GraphDef()
            tf.Graph.as_graph_def()
            graph_def.ParseFromString(f.read())
            g_in = tf.import_graph_def(graph_def)
    LOGDIR = '/log'
    train_writer = tf.summary.FileWriter(LOGDIR)
    train_writer.add
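A likely cause (an assumption, since the question is truncated): a saved_model.pb produced by the SavedModel exporter is not a frozen GraphDef, so graph_def.ParseFromString() cannot parse it. A minimal TensorFlow 1.x sketch that loads the export directory instead; the [SERVING] tag is assumed and should match whatever tags the model was exported with:

    import tensorflow as tf  # TensorFlow 1.x

    # Point at the export directory (the folder containing saved_model.pb),
    # not at the .pb file itself.
    export_dir = "tensorflow/trained"

    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], export_dir)
        # Write the restored graph for TensorBoard, as the question attempts.
        train_writer = tf.summary.FileWriter('/log', sess.graph)
        train_writer.close()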

I have an NVIDIA Quadro 2000 graphics card, and I want to install TensorFlow. Will it work?

耗尽温柔 submitted on 2020-02-16 10:36:41
Question: I know the Quadro 2000 has CUDA compute capability 2.1. My PC specs are as follows: Quadro 2000, 16 GB RAM, Xeon(R) CPU W3520 @ 2.67 GHz, Windows 10 Pro. I want to use TensorFlow for machine learning and deep learning. Please explain in a little depth, as I am a beginner. Answer 1: Your system can run TensorFlow, but not with GPU support, because that requires a GPU with compute capability of more than 3.0, and your GPU is only a compute capability 2.1 device. You can read more about it here. If you want to use GPU for
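A quick way to confirm what TensorFlow can actually see, assuming the CPU-only TensorFlow 1.x package is installed (pip install tensorflow, not tensorflow-gpu):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # Expected False on a compute capability 2.1 card such as the Quadro 2000.
    print(tf.test.is_gpu_available())

    # Lists the devices TensorFlow can use; only a CPU device should appear.
    print([d.name for d in device_lib.list_local_devices()])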

TensorFlow: how to concat tensors with a specific index

百般思念 submitted on 2020-02-06 09:52:07
Question: I have tensors like these:

    tensor_a = [[[[255, 255, 255]]], [[[100, 100, 100]]]]
    tensor_b = [[[[0.1, 0.2]]], [[[0.3, 0.4]]]]
    tensor_c = [[[[1]]], [[[2]]]]

I am trying to concatenate the tensors above into tensor_d, like this:

    tensor_d = [[[[255, 255, 255, 0.1, 1]]], [[[100, 100, 100, 0.3, 2]]]]

But I have no idea how to concatenate them. I tried using a for loop to append tensors to a list, but that was too slow (the real shape of tensor_a is (10, 64, 64, 3)). Answer 1: You can use tensor manipulation such as tf.split and tf
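A minimal sketch of the tf.split / tf.concat approach the answer starts to describe: keep only the first component of tensor_b's last axis, then concatenate everything along that axis in a single op instead of a Python loop.

    import tensorflow as tf

    tensor_a = tf.constant([[[[255., 255., 255.]]], [[[100., 100., 100.]]]])
    tensor_b = tf.constant([[[[0.1, 0.2]]], [[[0.3, 0.4]]]])
    tensor_c = tf.constant([[[[1.]]], [[[2.]]]])

    # Split off the first component of tensor_b (0.1 / 0.3), as tensor_d keeps.
    b_first, _ = tf.split(tensor_b, [1, 1], axis=-1)

    # One concat along the last axis; result has shape (2, 1, 1, 5).
    tensor_d = tf.concat([tensor_a, b_first, tensor_c], axis=-1)
    # -> [[[[255, 255, 255, 0.1, 1]]], [[[100, 100, 100, 0.3, 2]]]]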

Implementing a ConvLSTM but facing a dimension error

戏子无情 submitted on 2020-02-06 07:24:06
Question: I am implementing a ConvLSTM based on this problem, but I am facing a dimensionality error. Data set: I am currently working with one video, which has 4500 images of size (28, 28). The data set is in vectorized form, so I get (4500, 784). I split the images using TimeSeriesSplit and reshape them with:

    x_train = x_train.reshape(-1, 28, 28, 1)
    x_test = x_test.reshape(-1, 28, 28, 1)

My model is as follows:

    model = models.Sequential()
    model.add(layers.ConvLSTM2D(
        filters=40, kernel_size=(3, 3), input_shape=
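The likely culprit (an assumption, since the question is cut off): ConvLSTM2D expects 5-D input of shape (samples, time_steps, rows, cols, channels), while the reshapes above produce 4-D arrays. A minimal sketch that restores the missing time axis; the window length time_steps is a made-up value, not from the question:

    import numpy as np
    from tensorflow.keras import layers, models

    time_steps = 10                      # assumed window length
    frames = np.random.rand(4500, 784)   # stand-in for the vectorized frames

    # 4500 frames grouped into windows of 10 -> shape (450, 10, 28, 28, 1)
    x_train = frames.reshape(-1, time_steps, 28, 28, 1)

    model = models.Sequential()
    model.add(layers.ConvLSTM2D(filters=40, kernel_size=(3, 3),
                                padding='same',
                                input_shape=(time_steps, 28, 28, 1)))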

Best practice for video ground truthing?

左心房为你撑大大i submitted on 2020-02-06 06:24:08
Question: I would like to train a deep learning framework (TensorFlow) for object detection with a new object category. As the source for the ground truthing I have multiple video files that contain the object (only part of the image contains the object). How should I ground truth the video? Should I extract it frame by frame and label every frame, even when those frames will be quite similar? Or what would be best practice for such a task? Open-source tools are preferred. Answer 1: It usually works as you
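One practical starting point (not part of the truncated answer): sample every Nth frame with OpenCV so that near-identical consecutive frames don't all need labels. The stride of 10 and the file names are assumptions.

    import os
    import cv2

    os.makedirs("frames", exist_ok=True)
    cap = cv2.VideoCapture("input_video.mp4")  # hypothetical video file

    stride, idx = 10, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:                  # keep one frame out of every 10
            cv2.imwrite("frames/frame_%06d.png" % idx, frame)
        idx += 1
    cap.release()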

Neural network isn't learning for the first few epochs in Keras

僤鯓⒐⒋嵵緔 submitted on 2020-02-05 14:39:20
Question: I'm testing simple networks in Keras with the TensorFlow backend, and I ran into an issue with the sigmoid activation function. The network isn't learning for the first 5-10 epochs, and then everything is fine. I tried using initializers and regularizers, but that only made it worse. I use the network like this:

    import numpy as np
    import keras
    from numpy import expand_dims
    from keras.preprocessing.image import ImageDataGenerator
    from matplotlib import pyplot

    # load the image
    (x_train, y_train), (x
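A common explanation (an assumption, since the question's model code is cut off): sigmoid saturates easily, so its gradients are near zero early in training. A minimal sketch of the usual remedy, using ReLU in the hidden layers and keeping a squashing activation only at the output; the layer sizes are made up:

    from keras.models import Sequential
    from keras.layers import Dense, Flatten

    model = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(128, activation='relu'),     # ReLU instead of sigmoid hidden layers
        Dense(10, activation='softmax'),   # output activation stays task-specific
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])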

TensorFlow seq2seq chatbot always gives the same outputs

此生再无相见时 submitted on 2020-02-04 12:16:04
Question: I'm trying to make a seq2seq chatbot with TensorFlow, but it seems to converge to the same outputs despite different inputs. The model gives different outputs when first initialized, but quickly converges to the same outputs after a few epochs. This is still an issue even after many epochs with a low cost. However, the model seems to do fine when trained on smaller datasets (say 20), but it fails with larger ones. I'm training on the Cornell Movie Dialogs Corpus with a 100-dimensional and
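One common culprit in this failure mode (an assumption, not stated in the truncated question): if padding positions are counted in the cross-entropy, the decoder can lower the loss by always emitting the most frequent tokens. A TensorFlow 1.x sketch of masking the padding out of the loss; the placeholder shapes and vocabulary size are made up:

    import tensorflow as tf  # TensorFlow 1.x

    logits = tf.placeholder(tf.float32, [None, None, 10000])  # (batch, time, vocab)
    targets = tf.placeholder(tf.int32, [None, None])          # (batch, time)
    lengths = tf.placeholder(tf.int32, [None])                # true length per example

    # 1.0 for real tokens, 0.0 for padding positions.
    mask = tf.sequence_mask(lengths, tf.shape(targets)[1], dtype=tf.float32)

    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=targets, logits=logits)
    loss = tf.reduce_sum(xent * mask) / tf.reduce_sum(mask)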

Keras Array Input Error

好久不见. submitted on 2020-02-04 05:28:28
Question: I get the following error in my Keras model:

    ValueError: Error when checking model input: the list of Numpy arrays that
    you are passing to your model is not the size the model expected. Expected
    to see 6 arrays but instead got the following list of 3 arrays:
    [array([[ 0, 0, 0, ..., 18, 12, 1],
            [ 0, 0, 0, ..., 18, 11, 1],
            [ 0, 0, 0, ..., 18, 9, 1],
            ...,
            [ 0, 0, 0, ..., 18, 15, 1],
            [ 0, 0, 0, ..., 18, 9, ...

Is the model mixing something up? This happens when I feed input to my model.
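The message means the model was built with 6 Input layers, but fit() or predict() received a list of only 3 arrays. A minimal sketch (layer shapes and sizes are hypothetical) showing the call Keras expects, with one NumPy array per declared input, in declaration order:

    import numpy as np
    from keras.layers import Input, Dense, concatenate
    from keras.models import Model

    inputs = [Input(shape=(20,)) for _ in range(6)]   # model declares 6 inputs
    merged = concatenate(inputs)
    output = Dense(1, activation='sigmoid')(merged)
    model = Model(inputs=inputs, outputs=output)
    model.compile(optimizer='adam', loss='binary_crossentropy')

    x = [np.random.rand(32, 20) for _ in range(6)]    # one array per input
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(x, y, epochs=1)   # passing a list of 3 arrays instead raises the error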
