deep-learning

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray) when trying to predict Tesla stock

左心房为你撑大大i submitted on 2020-12-11 08:47:04
Question: At the end of the code you can see that I have tried converting this into a NumPy array, but I don't understand why TensorFlow doesn't support it. I have looked at the other related pages, but none seemed to help. Is there some other format I have to convert the data to in order for it to fit the model properly? This is what the Keras documentation says about x: "Vector, matrix, or array of training data (or list if the model has multiple inputs). If all inputs in the model are named, you can also pass a list mapping input names to data. x can…"
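
This error almost always means the array handed to fit()/predict() has dtype=object, typically because it is a ragged list of per-window arrays rather than one rectangular numeric array. The question's own data is not shown, so the names below are hypothetical; a minimal sketch of the failure and the usual fix:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the scraped price windows: if the rows have
# unequal lengths, np.array() stores them with dtype=object, and
# tf.convert_to_tensor / model.fit then raise exactly this ValueError.
ragged = [np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0])]
x = np.array(ragged, dtype=object)
# tf.convert_to_tensor(x)  # ValueError: Failed to convert a NumPy array ...

# Fix: make every window the same length (pad or trim), then cast to float32.
fixed = np.array([np.resize(r, 3) for r in ragged], dtype=np.float32)
fixed = fixed.reshape(len(fixed), 3, 1)  # (samples, timesteps, features) for an LSTM
print(tf.convert_to_tensor(fixed).shape)  # (2, 3, 1) -- conversion now succeeds
```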

'Dense' object has no attribute 'op'

落爺英雄遲暮 submitted on 2020-12-08 06:11:37
Question: I am trying to build a fully connected model using tensorflow.keras; here is my code:

    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Dense, Flatten

    def load_model(input_shape):
        input = Input(shape=input_shape)
        dense_shape = input_shape[0]
        x = Flatten()(input)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense…
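
The body is cut off before the line that triggers the error, but this AttributeError usually means a Dense layer object, rather than the tensor it produces, was passed as the model's output, i.e. the last Dense was never called on x. A minimal sketch of the mistake and the fix (the layer sizes here are illustrative, not from the question):

```python
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

inp = Input(shape=(10,))
x = Flatten()(inp)
x = Dense(10, activation='relu')(x)

out = Dense(1)           # a Dense *layer* object -- it has no '.op'
# Model(inputs=inp, outputs=out)  # AttributeError: 'Dense' object has no attribute 'op'

out = Dense(1)(x)        # calling the layer on a tensor yields a tensor
model = Model(inputs=inp, outputs=out)  # now builds cleanly
model.summary()
```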

logits and labels must be broadcastable error in TensorFlow RNN

泪湿孤枕 submitted on 2020-12-08 05:48:10
Question: I am new to TensorFlow and deep learning. I am trying to see how the loss decreases over 10 epochs in an RNN model that I created to read a Kaggle dataset containing credit card fraud data. I am trying to classify the transactions as fraud (1) or not fraud (0). When I try to run the code below, I keep getting this error:

    2018-07-30 14:59:33.237749: W tensorflow/core/kernels/queue_base.cc:277]
    _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue…
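
The queue warning quoted above is incidental; the actual "logits and labels must be broadcastable" failure means the logits and labels tensors disagree in shape. The question's model is not shown past this point, so the sketch below only illustrates the shape contract the loss op enforces:

```python
import tensorflow as tf

# The loss op requires logits and labels of identical shape:
# (batch_size, num_classes). For fraud (1) vs. not fraud (0) that means a
# final layer with 2 units and one-hot labels; a mismatch in either
# dimension raises "logits and labels must be broadcastable".
batch_size, num_classes = 32, 2
logits = tf.random.normal([batch_size, num_classes])   # output of the last layer
labels = tf.one_hot(tf.random.uniform([batch_size], 0, 2, tf.int32), num_classes)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
print(loss)
```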

Shape of image after MaxPooling2D with padding='same' -- calculating layer-by-layer shape in a convolutional autoencoder

时光总嘲笑我的痴心妄想 submitted on 2020-12-06 18:46:32
Question: Very briefly, my question relates to the image size not remaining the same as the input size after a max-pooling layer when I use padding='same' in Keras. I am going through the Keras blog post Building Autoencoders in Keras and building a convolutional autoencoder. The autoencoder code is as follows:

    input_layer = Input(shape=(28, 28, 1))
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_layer)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu',…
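
The short answer: padding='same' only preserves spatial size when the stride is 1. Its actual guarantee is output = ceil(input / stride), so a 2x2 pooling layer with its default stride of 2 still halves each dimension. A sketch reproducing the blog encoder's shapes:

```python
import math
from tensorflow.keras.layers import Conv2D, Input, MaxPooling2D
from tensorflow.keras.models import Model

# padding='same' does NOT promise "output size == input size"; it promises
# out = ceil(in / stride). Conv2D below uses stride 1, so 28 stays 28, but
# MaxPooling2D((2, 2)) defaults to stride 2, so 28 -> ceil(28 / 2) = 14.
inp = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)  # (28, 28, 16)
x = MaxPooling2D((2, 2), padding='same')(x)                     # (14, 14, 16)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)     # (14, 14, 8)
x = MaxPooling2D((2, 2), padding='same')(x)                     # (7, 7, 8)
Model(inp, x).summary()   # prints the same shapes layer by layer

print(math.ceil(28 / 2))  # 14 -- the 'same' padding formula in action
```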

TensorFlow 2.0 InvalidArgumentError: assertion failed: [Condition x == y did not hold element-wise:]

我们两清 submitted on 2020-12-06 12:20:24
Question: I am training an MNIST CNN. When I run my code, this error comes up. I tried other answers, but they did not work. I am new to TensorFlow, so can someone explain this error to me? I am using PyCharm 2020.2 and Python 3.6 in Anaconda, and there is no help I could find. Here is my code:

    import tensorflow as tf
    from tensorflow.keras.models import Sequential

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = tf.keras.utils.normalize(x_train, axis=1)
    x…
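
The code is truncated before the model itself, so the cause cannot be pinned down here, but in TF 2.x MNIST scripts this assertion is most often a logits/labels mismatch: Conv2D fed images without a channel axis, or integer labels paired with the non-sparse loss. A sketch showing both fixes (the model layers are assumed, not taken from the question):

```python
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)

# Fix 1: Conv2D expects a channel axis; (60000, 28, 28) -> (60000, 28, 28, 1).
x_train = x_train.reshape(-1, 28, 28, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Fix 2: y_train holds integers 0-9, so the loss must be the *sparse* variant;
# plain 'categorical_crossentropy' would expect one-hot labels and can trip the
# "Condition x == y did not hold" assertion.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
```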

How to enforce monotonicity for (regression) model outputs in Keras?

半世苍凉 submitted on 2020-12-06 02:40:29
Question: I am currently working on a problem where I provide a neural network with an input variable a, and another input x which is a monotonically increasing sequence of N numbers. So my network basically looks something like this:

    a_input = Input(shape=[1], name='a')
    x_input = Input(shape=[N], name='x')
    nn = concatenate([a_input, x_input])
    nn = Dense(100, activation='relu')(nn)
    nn = Dense(N, activation='relu')(nn)
    model = Model(inputs=[a_input, x_input], outputs=[nn])
    model.compile(loss=…
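
The question is cut off before the constraint is spelled out, but one standard way to make the N outputs monotonically non-decreasing by construction (an approach of my own choosing, not necessarily the asker's) is to predict non-negative increments and cumulative-sum them:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Lambda, concatenate
from tensorflow.keras.models import Model

N = 10  # length of the monotonic sequence, as in the question

a_input = Input(shape=[1], name='a')
x_input = Input(shape=[N], name='x')
nn = concatenate([a_input, x_input])
nn = Dense(100, activation='relu')(nn)

# Predict N non-negative increments (softplus > 0), then cumulative-sum them:
# a running sum of non-negative terms is monotonically non-decreasing.
deltas = Dense(N, activation='softplus')(nn)
outputs = Lambda(lambda d: tf.cumsum(d, axis=-1))(deltas)

model = Model(inputs=[a_input, x_input], outputs=[outputs])
model.compile(loss='mse', optimizer='adam')
model.summary()
```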

Why does the BERT transformer use the [CLS] token for classification instead of an average over all tokens?

强颜欢笑 submitted on 2020-12-01 12:00:50
Question: I am doing experiments on the BERT architecture and found that most fine-tuning tasks take the final hidden layer as the text representation, which is later passed to other models for the downstream task. BERT's last layer looks like this (figure omitted in this scrape; it shows the [CLS] token being taken from each sentence; image source). I went through many discussions: a huggingface issue, a Data Science forum question, and a GitHub issue. Most data scientists give this explanation: BERT is bidirectional, the [CLS]…
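
For reference, the two representations the question contrasts are easy to compute side by side. A sketch using the Hugging Face transformers library (the model name and example text are mine, not from the question):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

enc = tok(["the movie was great"], return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state      # (batch, seq_len, 768)

cls_vec = hidden[:, 0]                          # option 1: the [CLS] embedding

mask = enc["attention_mask"].unsqueeze(-1)      # option 2: masked mean pooling
mean_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(cls_vec.shape, mean_vec.shape)            # both torch.Size([1, 768])
```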

Change the input size in Keras

前提是你 submitted on 2020-11-30 16:58:06
Question: I have trained a fully convolutional neural network with Keras. I used the Functional API and defined the input layer as Input(shape=(128,128,3)), corresponding to the size of the images in my training set. However, I want to use the trained model on images of variable sizes (which should be fine, because the network is fully convolutional). To do this, I need to change my input layer to Input(shape=(None,None,3)). The obvious way to solve the problem would have been to train my…
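
Since the question breaks off mid-sentence, here is only the commonly recommended pattern for this situation: rebuild the identical architecture with a size-agnostic input and copy the trained weights over. The tiny network below is a hypothetical stand-in for the asker's model:

```python
import numpy as np
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

def build_fcn(input_shape):
    # Hypothetical stand-in for the asker's fully convolutional architecture.
    inp = Input(shape=input_shape)
    x = Conv2D(16, (3, 3), padding='same', activation='relu')(inp)
    out = Conv2D(3, (1, 1), padding='same')(x)
    return Model(inp, out)

trained = build_fcn((128, 128, 3))      # the fixed-size model from training
flexible = build_fcn((None, None, 3))   # identical layers, size-agnostic input
flexible.set_weights(trained.get_weights())  # conv kernels don't depend on H, W

print(flexible.predict(np.zeros((1, 256, 256, 3))).shape)  # (1, 256, 256, 3)
```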