deep-learning

module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'CuDNNLSTM'

≯℡__Kan透↙ submitted on 2020-03-21 14:23:37
Question: When I write tf.keras.layers.LSTM, I get the warning "Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU." But when I change the layer to tf.keras.layers.CuDNNLSTM, I get the error AttributeError: module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'CuDNNLSTM'. The TensorFlow version is 2.0.0-alpha0 and the Keras version is 2.2.4-tf. How can I fix this problem?

Answer 1: In general, in TensorFlow 2.0 we should just use:
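tf.keras.layers.LSTM. In TF 2.x the plain LSTM layer dispatches to the fused cuDNN kernel automatically when it runs on a GPU with its cuDNN-compatible defaults (activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, unroll=False, use_bias=True). A minimal sketch, with an illustrative model shape:

import tensorflow as tf

# The plain LSTM layer uses the cuDNN implementation automatically on GPU
# as long as the cuDNN-compatible defaults listed above are kept.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(100, 8)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# If the explicit layer is really needed, it is still reachable in TF 2.x as:
# tf.compat.v1.keras.layers.CuDNNLSTM(64)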

How to use deep learning models for time-series forecasting?

柔情痞子 submitted on 2020-03-18 12:19:24
问题 Question: I have signals recorded from machines (m1, m2, and so on) for 28 days. (Note: each signal on each day is 360 samples long.)

machine_num, day1, day2, ..., day28
m1, [12, 10, 5, 6, ...], [78, 85, 32, 12, ...], ..., [12, 12, 12, 12, ...]
m2, [2, 0, 5, 6, ...], [8, 5, 32, 12, ...], ..., [1, 1, 12, 12, ...]
...
m2000, [1, 1, 5, 6, ...], [79, 86, 3, 1, ...], ..., [1, 1, 12, 12, ...]

I want to predict the signal sequence of each machine for the next 3 days, i.e. day29, day30, and day31. However, I don't
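One common way to frame this task is sequence-to-sequence regression: treat the 28 daily signals of a machine as 28 timesteps with 360 features each, and predict a (3, 360) block for the next three days. A minimal sketch under those assumptions (the layer sizes and training settings are illustrative, and random arrays stand in for the real data):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_machines, n_days_in, n_days_out, sig_len = 2000, 28, 3, 360
X = np.random.rand(n_machines, n_days_in, sig_len)    # 28 days of 360-sample signals
y = np.random.rand(n_machines, n_days_out, sig_len)   # the next 3 days per machine

model = keras.Sequential([
    layers.LSTM(128, return_sequences=True, input_shape=(n_days_in, sig_len)),
    layers.LSTM(64),
    layers.Dense(n_days_out * sig_len),
    layers.Reshape((n_days_out, sig_len)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.1)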

Should I normalize my features before throwing them into RNN?

烂漫一生 submitted on 2020-03-17 12:06:30
Question: I am playing with some demos of recurrent neural networks. I noticed that the scale of my data differs a lot between columns, so I am considering doing some preprocessing before I feed data batches into my RNN. The close column is the target I want to predict in the future.

    open   high    low     volume  price_change  p_change     ma5    ma10  \
0  20.64  20.64  20.37  163623.62         -0.08     -0.39  20.772  20.721
1  20.92  20.92  20.60  218505.95         -0.30     -1.43  20.780  20.718
2  21.00  21.15  20.72  269101.41         -0.08     -0.38  20.812  20.755
3
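The usual approach is to scale each column before batching, fitting the scaler on the training split only and inverting the transform on the predicted close values afterwards. A minimal sketch (the DataFrame below holds only the few rows printed above, and the close values are made up for illustration):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "open":   [20.64, 20.92, 21.00],
    "high":   [20.64, 20.92, 21.15],
    "low":    [20.37, 20.60, 20.72],
    "volume": [163623.62, 218505.95, 269101.41],
    "close":  [20.56, 20.64, 20.94],   # illustrative target values
})

split = int(len(df) * 0.8)
train_df, test_df = df.iloc[:split], df.iloc[split:]

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_df)   # fit on training data only
test_scaled = scaler.transform(test_df)         # reuse the same statistics

# After predicting scaled close values, map them back to the original price
# scale with scaler.inverse_transform on an array with the same column layout.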

Pytorch reshape tensor dimension

a 夏天 submitted on 2020-03-17 09:53:07
Question: For example, I have a 1D vector with shape (5,), and I would like to reshape it into a 2D matrix of shape (1, 5). Here is how I do it with numpy:

>>> import numpy as np
>>> a = np.array([1,2,3,4,5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1,5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
>>>

But how can I do that with a PyTorch Tensor (and Variable)? I don't want to switch back to numpy and then to a Torch variable again, because that will lose the backpropagation information. Here is what I have in PyTorch: >>
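For reference, a short sketch of the equivalent calls in current PyTorch (Variable is no longer needed, because tensors track gradients themselves, so none of these break backpropagation):

import torch

a = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
print(a.shape)                      # torch.Size([5])

b = a.view(1, 5)                    # same storage, new shape
c = a.reshape(1, 5)                 # like view, but copies only if it has to
d = a.unsqueeze(0)                  # insert a dimension of size 1 at position 0

print(b.shape, c.shape, d.shape)    # all torch.Size([1, 5])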

Keras- Embedding layer

佐手、 submitted on 2020-03-17 09:20:38
Question: What do input_dim, output_dim and input_length mean in Embedding(input_dim, output_dim, input_length)? From the documentation I understand:

input_dim: int > 0. Size of the vocabulary.
output_dim: int >= 0. Dimension of the dense embedding.
input_length: Length of input sequences.

So, when my input is a word like google.com, with each character represented by an integer, e.g. [5, 2, 2, 5, 8, 3, 4, 1, 2, 9], the maximum possible word length is 75 and the maximum number of possible characters is 38. How should I decide
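For the setup described here, input_dim is the number of distinct integer indices the layer must accept (38 possible characters, or 39 if index 0 is reserved for padding), output_dim is the size of the learned vector per character, and input_length is the padded word length of 75. A minimal sketch with an assumed embedding size:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

vocab_size = 39        # 38 possible characters plus index 0 reserved for padding
embedding_dim = 16     # assumed size of each character's dense vector
max_word_len = 75      # maximum word length; shorter words are padded

model = Sequential([
    Embedding(input_dim=vocab_size,
              output_dim=embedding_dim,
              input_length=max_word_len),
])
model.build(input_shape=(None, max_word_len))
model.summary()
# Input:  (batch, 75) integer-encoded characters
# Output: (batch, 75, 16) dense character embeddings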

Keras Tokenizer num_words doesn't seem to work

北城以北 submitted on 2020-03-17 08:31:28
Question:

>>> t = Tokenizer(num_words=3)
>>> l = ["Hello, World! This is so&#$ fantastic!", "There is no other world like this one"]
>>> t.fit_on_texts(l)
>>> t.word_index
{'fantastic': 6, 'like': 10, 'no': 8, 'this': 2, 'is': 3, 'there': 7, 'one': 11, 'other': 9, 'so': 5, 'world': 1, 'hello': 4}

I would have expected t.word_index to contain just the top 3 words. What am I doing wrong?

Answer 1: There is nothing wrong with what you are doing. word_index is computed the same way no matter how many most frequent words
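you ask for; num_words only limits the output of methods such as texts_to_sequences. A short check of that behavior (expected outputs shown as comments):

from tensorflow.keras.preprocessing.text import Tokenizer

t = Tokenizer(num_words=3)
l = ["Hello, World! This is so&#$ fantastic!",
     "There is no other world like this one"]
t.fit_on_texts(l)

print(t.word_index)            # full vocabulary, regardless of num_words
print(t.texts_to_sequences(l))
# Only words with index < num_words survive, i.e. 'world' (1) and 'this' (2):
# [[1, 2], [1, 2]]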

Predicted Image id and box from SSD

醉酒当歌 submitted on 2020-03-05 01:40:47
Question: How do I find the predicted image id and box from SSD? I am using this GitHub link. Here is the test function from which I want to save the image id and box:

def test(loader, net, criterion, device):
    net.eval()
    running_loss = 0.0
    running_regression_loss = 0.0
    running_classification_loss = 0.0
    num = 0
    for _, data in enumerate(loader):
        images, boxes, labels = data
        images = images.to(device)
        boxes = boxes.to(device)
        labels = labels.to(device)
        num += 1
        with torch.no_grad():
            confidence, locations = net(images)
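As a rough sketch of one way to collect the predictions inside such a loop (the id handling below is an assumption, since the dataset shown does not yield explicit image ids; the batch index and position within the batch are used instead):

import torch

def collect_predictions(loader, net, device):
    # Run the network and keep the raw confidences and predicted boxes per image.
    net.eval()
    results = []
    with torch.no_grad():
        for batch_idx, data in enumerate(loader):
            images, boxes, labels = data
            images = images.to(device)
            confidence, locations = net(images)
            for i in range(images.size(0)):
                results.append({
                    "image_id": batch_idx * loader.batch_size + i,
                    "pred_boxes": locations[i].cpu(),
                    "pred_scores": confidence[i].cpu(),
                })
    return results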

ValueError: Error when checking input: expected dense_1_input to have shape (180,) but got array with shape (1,)

最后都变了- submitted on 2020-03-04 18:55:12
Question: My learning model is as follows (using Keras):

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(X_train.shape[0],)))
model.add(Dense(500, activation='relu'))
model.add(Dense(2, activation='softmax'))

My input data X_train is an array of shape (180,), and the corresponding y_train containing labels is also an array of shape (180,). I tried to compile and fit the model as follows:

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=[
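The mismatch comes from input_shape=(X_train.shape[0],): shape[0] is the number of samples (180), not the number of features per sample. Assuming each of the 180 entries is a separate sample with a single feature (consistent with y_train also holding 180 labels), a minimal sketch of the fix with stand-in data:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X_train = np.random.rand(180).reshape(-1, 1)    # 180 samples, 1 feature each -> (180, 1)
y_train = np.random.randint(0, 2, size=180)     # 180 integer class labels

model = Sequential([
    Dense(100, activation='relu', input_shape=(X_train.shape[1],)),  # features per sample, not sample count
    Dense(500, activation='relu'),
    Dense(2, activation='softmax'),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5)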

Where can I find the source code for the Google Cloud Platform Deep Learning VM images and Deep Learning Containers?

 ̄綄美尐妖づ submitted on 2020-03-04 18:43:26
Question: GCP gives a general overview of what's installed in Deep Learning VMs, but seeing the actual shell scripts would make it easier to determine the exact differences between VM images, debug any deployment issues, and create derivative images. Someone already asked about the Dockerfiles for Deep Learning Containers, but I figured I'd repeat the question to increase the odds of it getting answered.

Answer 1: You can create and set up a local deep learning container. Have a look at the official

Merged 1D-CNN and 2D-CNN

亡梦爱人 submitted on 2020-03-04 06:53:35
Question: I want to build a merged CNN model that combines a 1D CNN and a 2D CNN. I tried many ways to build it; this one worked for me up to a point, but I don't know why I get the error below when calling model_combined.summary(). I have attached two images that contain the summaries of the 1D and 2D CNNs (summary of 1D CNN, summary of 2D CNN). Thank you very much!

ValueError                                Traceback (most recent call last)
<ipython-input-20-3c58e6d04c4d> in <module>()
     60 #opt = RMSprop(lr=0.001, rho=0.9)
     61 model_combined.compile(optimizer=opt, loss=
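For comparison, a minimal sketch of how such a merge is usually wired with the functional API (the input shapes and layer sizes below are assumptions for illustration, not the architecture from the attached summaries): pool each branch down to a feature vector, concatenate the two vectors, and build one Model over both inputs.

from tensorflow.keras import Input, Model
from tensorflow.keras import layers

in_1d = Input(shape=(360, 1))                               # assumed 1D signal input
x1 = layers.Conv1D(32, 5, activation='relu')(in_1d)
x1 = layers.GlobalMaxPooling1D()(x1)

in_2d = Input(shape=(64, 64, 1))                            # assumed 2D image input
x2 = layers.Conv2D(32, (3, 3), activation='relu')(in_2d)
x2 = layers.GlobalMaxPooling2D()(x2)

merged = layers.concatenate([x1, x2])                       # join the two branches
out = layers.Dense(64, activation='relu')(merged)
out = layers.Dense(2, activation='softmax')(out)

model_combined = Model(inputs=[in_1d, in_2d], outputs=out)
model_combined.compile(optimizer='adam', loss='categorical_crossentropy')
model_combined.summary()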