deep-learning

Normalization of input data in Keras

Submitted by 不打扰是莪最后的温柔 on 2020-05-13 05:35:07
Question: One common task in DL is to normalize input samples to zero mean and unit variance. One can perform the normalization "manually" with code like this:

    mean = np.mean(X, axis=0)
    std = np.std(X, axis=0)
    X = [(x - mean) / std for x in X]

However, the mean and std values must then be kept around to normalize the test data, in addition to the Keras model being trained. Since the mean and std are learnable parameters, perhaps Keras can learn them? Something like this: m = Sequential()
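One way to keep the statistics with the model's artifacts is to compute them once on the training set and reuse them for every later split (newer Keras versions also ship a `Normalization` preprocessing layer whose `adapt()` method stores these statistics as non-trainable weights). A minimal NumPy sketch of the manual approach; all names here are illustrative, not from the question:

```python
import numpy as np

def fit_standardizer(X_train):
    """Compute per-feature mean and std on the TRAINING set only."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)
    std[std == 0] = 1.0  # guard against zero-variance features
    return mean, std

def standardize(X, mean, std):
    """Apply the training-set statistics to any split."""
    return (X - mean) / std

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
X_test = rng.normal(loc=5.0, scale=2.0, size=(20, 3))

mean, std = fit_standardizer(X_train)
X_train_n = standardize(X_train, mean, std)
X_test_n = standardize(X_test, mean, std)  # reuses training statistics
```

The point of the two-function split is exactly the question's concern: `mean` and `std` are artifacts that must be saved and shipped alongside the trained model.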

ssh AWS, Jupyter Notebook not showing up on web browser

Submitted by 血红的双手。 on 2020-05-13 05:34:56
Question: I am trying to use ssh to connect to the AWS "Deep Learning AMI for Amazon Linux", and everything works fine except Jupyter Notebook. This is what I got: ssh -i ~/.ssh/id_rsa ec2-user@yy.yyy.yyy.yy gave me:

    Last login: Wed Oct 4 18:01:23 2017 from 67-207-109-187.static.wiline.com
    [ASCII login banner: Deep Learning AMI for Amazon Linux]
    The README file for the AMI: /home/ec2-user/src/README
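A usual culprit is that the notebook server listens only on the remote machine's localhost, so a browser on the local machine never reaches it. One common workaround is SSH port forwarding; the key path and host below are the placeholders from the question, and the port is Jupyter's default:

```shell
# Tunnel the remote Jupyter port (8888) to the local machine.
ssh -i ~/.ssh/id_rsa -L 8888:localhost:8888 ec2-user@yy.yyy.yyy.yy

# On the remote machine, start the server without a browser:
jupyter notebook --no-browser --port=8888

# Then open http://localhost:8888 in the LOCAL browser and paste
# the token printed by the command above if prompted.
```

(These commands target a remote host, so they are shown as a sketch rather than something runnable here.)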

Cross validation in deep neural networks

Submitted by 主宰稳场 on 2020-05-13 04:11:32
Question: How do you perform cross-validation in a deep neural network? I know that to perform cross-validation you train on all folds except one and test on the excluded fold, then repeat this for each of the k folds and average the accuracies. But how does this relate to training iterations? Do you update the parameters at each fold? Or do you perform k-fold cross-validation for each iteration? Or is each training run on all folds but one considered one iteration? Answer 1: Cross-validation is a general
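The usual protocol is that each fold gets a completely fresh model: parameters are never carried from one fold to the next, and each of the k training runs goes through all of its epochs independently. A hedged NumPy sketch of that loop, where `train_and_score` is a hypothetical callback standing in for a full training run:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k (nearly) equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(X, y, k, train_and_score):
    """Train a fresh model per fold; return the mean held-out score.

    `train_and_score(X_tr, y_tr, X_val, y_val)` must build a NEW model,
    run the full training loop (all epochs) on the training folds, and
    return a score on the held-out fold. No parameters are reused.
    """
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[val_idx], y[val_idx]))
    return float(np.mean(scores))
```

Only the fold-averaged score is reported; the model actually deployed is typically retrained once on all the data afterwards.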

Validation loss when using Dropout

Submitted by 末鹿安然 on 2020-05-11 07:20:28
Question: I am trying to understand the effect of dropout on the validation Mean Absolute Error (a non-linear regression problem). [Figures: (1) without dropout, (2) with dropout of 0.05, (3) with dropout of 0.075.] Without any dropout the validation loss is higher than the training loss, as shown in (1). My understanding is that the validation loss should be only slightly higher than the training loss for a good fit. Carefully, I increased the dropout so that the validation loss is close to the training loss, as seen in (2). The dropout is only
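Part of the picture is that dropout (in its usual "inverted" form) is active only at training time: the training loss is measured on a handicapped network while the validation loss is measured on the full one, which by itself can push the training loss above the validation loss. A small, purely illustrative NumPy sketch of that train/eval asymmetry:

```python
import numpy as np

def dropout_forward(x, rate, training, rng):
    """Inverted dropout: zero units with probability `rate` at training
    time and rescale survivors by 1/(1-rate), so that evaluation needs
    no rescaling at all."""
    if not training or rate == 0.0:
        return x  # evaluation: the layer is a no-op
    keep = 1.0 - rate
    mask = rng.random(x.shape) < keep
    return x * mask / keep

rng = np.random.default_rng(0)
x = np.ones(10000)
train_out = dropout_forward(x, 0.05, training=True, rng=rng)
eval_out = dropout_forward(x, 0.05, training=False, rng=rng)
```

At training time some activations are zeroed but the expected value is preserved by the rescaling; at evaluation time the input passes through untouched.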

How does Fine-tuning Word Embeddings work?

Submitted by 前提是你 on 2020-05-11 06:28:09
Question: I've been reading some NLP-with-deep-learning papers and found that fine-tuning seems to be a simple yet confusing concept. The same question has been asked here, but it is still not quite clear. Fine-tuning pre-trained word embeddings into task-specific word embeddings is mentioned in papers such as Y. Kim, "Convolutional Neural Networks for Sentence Classification," and K. S. Tai, R. Socher, and C. D. Manning, "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks,"
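Mechanically, fine-tuning just means the pre-trained embedding matrix is included among the trainable parameters, so backpropagation updates it. Because an embedding lookup only touches the rows for the words in the current batch, only those rows move; words that never appear keep their pre-trained vectors. A hypothetical NumPy sketch (matrix and gradients are made up for illustration):

```python
import numpy as np

# Hypothetical "pre-trained" embeddings: vocabulary of 5 words, dim 3.
E = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9],
              [1.0, 1.1, 1.2],
              [1.3, 1.4, 1.5]])

def finetune_step(E, word_ids, grad_out, lr=0.1):
    """One SGD step on the embedding table.

    `grad_out[i]` is the loss gradient w.r.t. the embedding of
    `word_ids[i]`; only those rows change, every other pre-trained
    vector stays put until its word appears in a batch."""
    E = E.copy()
    np.subtract.at(E, word_ids, lr * grad_out)  # handles repeated ids too
    return E

word_ids = np.array([1, 3])          # the batch mentions words 1 and 3
grad_out = np.ones((2, 3))           # made-up gradients
E_new = finetune_step(E, word_ids, grad_out)
```

Freezing the embeddings (`trainable=False` in Keras terms) simply means skipping this update entirely.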

Why does my training loss have regular spikes?

Submitted by 吃可爱长大的小学妹 on 2020-05-09 19:38:45
Question: I'm training the Keras object detection model linked at the bottom of this question, although I believe my problem has to do neither with Keras nor with the specific model I'm trying to train (SSD), but rather with the way the data is passed to the model during training. Here is my problem (see image below): my training loss is decreasing overall, but it shows sharp, regular spikes. The unit on the x-axis is not training epochs, but tens of training steps. The spikes occur precisely once every
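One explanation consistent with spikes at a fixed period is an epoch-boundary effect: if the dataset size is not divisible by the batch size, the last batch of every epoch is smaller (or the leftover samples are handled differently), which can nudge the loss once per epoch. A quick check with made-up numbers (the real sizes are whatever the question's dataset uses):

```python
dataset_size = 21000  # hypothetical; not from the question
batch_size = 32

full_batches = dataset_size // batch_size  # complete batches per epoch
remainder = dataset_size % batch_size      # size of the short final batch

# If the spike period in steps matches full_batches (+1 for the short
# batch), the epoch boundary is a likely suspect.
```

Comparing that number against the observed spike period is a cheap first diagnostic before digging into the data pipeline itself.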

Using sample_weight in Keras for sequence labelling

Submitted by 倖福魔咒の on 2020-05-09 19:25:58
Question: I am working on a sequence labelling problem with unbalanced classes, and I would like to use sample_weight to address the imbalance. Basically, if I train the model for about 10 epochs, I get great results. If I train for more epochs, val_loss keeps dropping, but I get worse results. I'm guessing the model just detects more of the dominant class to the detriment of the smaller classes. The model has two inputs, for word embeddings and character embeddings, and the input is one of 7
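One common way to fill `sample_weight` for sequence labelling is to weight every label inversely to its class frequency, so each class contributes the same total loss (in older Keras, a per-timestep weight array also requires compiling with `sample_weight_mode="temporal"`). A hedged NumPy sketch with made-up labels:

```python
import numpy as np

def balanced_sample_weights(y):
    """Weight each label inversely to its class frequency so every
    class contributes equally to the loss. `y` holds integer class
    ids, one per (sample, timestep) for sequence labelling."""
    classes, counts = np.unique(y, return_counts=True)
    class_weight = y.size / (len(classes) * counts)  # mean weight ~1
    lookup = dict(zip(classes, class_weight))
    return np.vectorize(lookup.get)(y).astype(float)

# Toy labels: class 0 dominates (6 of 8 timesteps).
y = np.array([[0, 0, 0, 1],
              [0, 0, 2, 0]])
w = balanced_sample_weights(y)  # same shape as y
```

The resulting array is passed as `sample_weight` to `fit()`; rare-class timesteps then carry proportionally larger gradients.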