machine-learning

Should Cross Validation Score be performed on original or split data?

Submitted by 空扰寡人 on 2021-02-11 14:46:21
Question: When I want to evaluate my model with cross-validation, should I perform it on the original data (data that is not split into train and test sets) or on the train/test data? I know that the training data is used for fitting the model and the test data for evaluating it. If I use cross-validation, should I still split the data into train and test sets, or not?

    features = df.iloc[:, 4:-1]
    results = df.iloc[:, -1]
    x_train, x_test, y_train, y_test = train_test_split(features, results, test_size=0.3, random_state=0)
    clf =
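The usual pattern is to hold out the test set first, cross-validate on the training portion only, and keep the test set for a single final evaluation. A minimal sketch with scikit-learn; the toy dataset and the classifier choice are illustrative, not from the question:

    # Hold out a test set, cross-validate on the training data only,
    # then evaluate once on the untouched test set.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, x_train, y_train, cv=5)   # CV on the training split
    print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

    clf.fit(x_train, y_train)                               # final fit on all training data
    print("Held-out test accuracy: %.3f" % clf.score(x_test, y_test))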

Keras not running in multiprocessing

Submitted by 不打扰是莪最后的温柔 on 2021-02-11 14:18:15
Question: I'm trying to run my Keras model with multiprocessing because of a GPU OOM issue. I loaded all the libraries and set up the model inside the function used for multiprocessing. When I execute the code, it gets stuck at history = q.get(), which is multiprocessing.Queue.get(). And when I remove all the code related to multiprocessing.Queue(), execution finishes as soon as I start it, so I suspect the worker is not actually running; even a simple print() call shows no output.
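Two things commonly produce exactly these symptoms: CUDA being initialized in the parent before a fork (the child then dies silently, so q.get() blocks forever and print() never appears), and putting an unpicklable object such as the Keras History instance on the queue. A hedged sketch of the usual workaround; the model and data are placeholders:

    import multiprocessing as mp

    def train_worker(q):
        import numpy as np
        import tensorflow as tf   # imported in the child, not the parent
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
        model.compile(optimizer="adam", loss="mse")
        x = np.random.rand(64, 4)
        y = np.random.rand(64, 1)
        history = model.fit(x, y, epochs=2, verbose=0)
        q.put(history.history)    # send the plain dict; the History object may not pickle

    if __name__ == "__main__":
        ctx = mp.get_context("spawn")   # avoid forking a CUDA-initialized process
        q = ctx.Queue()
        p = ctx.Process(target=train_worker, args=(q,))
        p.start()
        print(q.get())                  # read before join() to avoid a full-queue deadlock
        p.join()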

How to implement CAM without visualize_cam in this code?

Submitted by 妖精的绣舞 on 2021-02-11 14:01:01
Question: I want to make a class activation map, so I have written this code:

    from keras.datasets import mnist
    from keras.layers import Conv2D, Dense, GlobalAveragePooling2D
    from keras.models import Model, Input
    from keras.utils import to_categorical

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train_resized = x_train.reshape((60000, 28, 28, 1))
    x_test_resized = x_test.reshape((10000, 28, 28, 1))
    y_train_hot_encoded = to_categorical(y_train)
    y_test_hot_encoded = to_categorical(y_test)
    inputs =
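Without keras-vis's visualize_cam, a CAM can be computed by hand: weight the last conv layer's feature maps by the final Dense layer's weights for the target class. A sketch in the spirit of the question's GAP architecture; the layer sizes and names here are assumptions:

    import numpy as np
    from tensorflow.keras import layers, Model

    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    last_conv = layers.Conv2D(64, 3, activation="relu", name="last_conv")(x)
    gap = layers.GlobalAveragePooling2D()(last_conv)
    outputs = layers.Dense(10, activation="softmax", name="clf")(gap)
    model = Model(inputs, outputs)

    def class_activation_map(img, class_idx):
        # second model returning both the conv feature maps and the prediction
        cam_model = Model(model.input,
                          [model.get_layer("last_conv").output, model.output])
        fmaps, preds = cam_model.predict(img[None, ...], verbose=0)
        w = model.get_layer("clf").get_weights()[0][:, class_idx]   # (64,)
        cam = np.tensordot(fmaps[0], w, axes=([-1], [0]))           # weighted sum -> (24, 24)
        return np.maximum(cam, 0) / (cam.max() + 1e-8)              # ReLU + normalize

The returned map can then be resized to 28x28 and overlaid on the input digit.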

Why doesn't the Adadelta optimizer decay the learning rate?

Submitted by 倖福魔咒の on 2021-02-11 12:32:07
Question: I have initialised an Adadelta optimizer in Keras (using the TensorFlow backend) and assigned it to a model:

    my_adadelta = keras.optimizers.Adadelta(learning_rate=0.01, rho=0.95)
    my_model.compile(optimizer=my_adadelta, loss="binary_crossentropy")

During training, I am using a callback to print the learning rate after every epoch:

    class LRPrintCallback(Callback):
        def on_epoch_end(self, epoch, logs=None):
            lr = self.model.optimizer.lr
            print(K.eval(lr))

However, this prints the same (initial)
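The constant output is expected: optimizer.lr holds the fixed learning_rate hyperparameter, while Adadelta adapts its effective step through internal accumulators (rho-averaged squared gradients and squared updates) that the lr attribute never reflects. A minimal sketch, assuming TensorFlow 2.x:

    import tensorflow as tf

    opt = tf.keras.optimizers.Adadelta(learning_rate=0.01, rho=0.95)
    var = tf.Variable(1.0)
    for step in range(3):
        with tf.GradientTape() as tape:
            loss = var ** 2
        opt.apply_gradients([(tape.gradient(loss, var), var)])
        # the hyperparameter stays 0.01 on every step; the adaptive scaling
        # lives in the optimizer's slot variables, not in learning_rate
        print(float(opt.learning_rate), float(var))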

How do you determine the correct dimension of Mel Spectrogram Feature Extraction for a NN

Submitted by 末鹿安然 on 2021-02-11 12:26:34
Question: I am trying to implement Mel spectrogram feature extraction:

    n_mels = 128

    # Extracting the mel-frequency spectrum for every file
    def extract_features(file_name):
        try:
            audio, sample_rate = librosa.load(file_name, res_type='kaiser_fast')
            mely = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=n_mels)
        except Exception as e:
            print("Error encountered while parsing file: ", file_name)
            return None
        return mely.T

It appears that I am implementing this feature extraction incorrectly, as when I check
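The varying dimension is almost always the time axis: melspectrogram returns (n_mels, n_frames), and n_frames depends on clip length, so each file yields a different shape after the transpose. One common fix, sketched under assumptions (the max_frames value and the log scaling are choices, not from the question), is to pad or truncate every clip to a fixed frame count:

    import numpy as np
    import librosa

    n_mels = 128
    max_frames = 431   # assumed: ~10 s at librosa's default sr=22050, hop=512

    def extract_features(file_name):
        try:
            audio, sample_rate = librosa.load(file_name, res_type="kaiser_fast")
            mely = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=n_mels)
        except Exception as e:
            print("Error encountered while parsing file:", file_name, e)
            return None
        mely = librosa.power_to_db(mely)          # log scale, common for NN input
        if mely.shape[1] < max_frames:            # zero-pad short clips
            mely = np.pad(mely, ((0, 0), (0, max_frames - mely.shape[1])))
        return mely[:, :max_frames].T             # fixed shape: (max_frames, n_mels)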

Reward function for learning to play Curve Fever game with DQN

Submitted by 会有一股神秘感。 on 2021-02-11 10:40:41
Question: I've made a simple version of Curve Fever, also known as "Achtung, die Kurve!". I want the machine to figure out how to play the game optimally. I copied and slightly modified an existing DQN from some Atari game examples, built with Google's TensorFlow. I'm trying to figure out an appropriate reward function. Currently, I use this reward setup:

- 0.1 for every frame it does not crash
- -500 for every crash

Is this the right approach? Do I need to tweak the values? Or do I need a completely
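The scheme itself (small per-frame survival bonus, penalty on crash) is standard shaping for survival games, but a -500 spike dwarfs the 0.1 signal and can destabilize Q-value targets; DQN implementations commonly keep rewards in roughly [-1, 1]. A hedged sketch of that variant, with illustrative values:

    def reward(crashed: bool) -> float:
        # small positive reward per surviving frame, bounded crash penalty;
        # keeping |reward| <= 1 mirrors the reward clipping used in the
        # original Atari DQN setup
        return -1.0 if crashed else 0.1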

How to save synthetic dataset in CSV file using SMOTE

Submitted by *爱你&永不变心* on 2021-02-11 08:26:30
Question: I am using credit card data for oversampling with SMOTE, following the code written on geeksforgeeks.org (Link). After running the following code, it prints something like this:

    print("Before OverSampling, counts of label '1': {}".format(sum(y_train == 1)))
    print("Before OverSampling, counts of label '0': {} \n".format(sum(y_train == 0)))

    # import SMOTE module from imblearn library
    # pip install imblearn (if you don't have imblearn in your system)
    from imblearn.over_sampling import SMOTE
    sm
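To keep the synthetic rows, the resampled arrays returned by SMOTE can be written out with pandas. A minimal sketch; the toy data, column names, and file name are assumptions (newer imblearn uses fit_resample, the older releases used fit_sample):

    import pandas as pd
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    # assumed imbalanced toy data standing in for the credit card set
    X, y = make_classification(n_samples=200, weights=[0.9], random_state=2)
    X_train = pd.DataFrame(X, columns=[f"V{i}" for i in range(X.shape[1])])

    sm = SMOTE(random_state=2)
    X_res, y_res = sm.fit_resample(X_train, y)

    resampled = pd.DataFrame(X_res, columns=X_train.columns)
    resampled["Class"] = y_res                          # label column name assumed
    resampled.to_csv("credit_card_smote.csv", index=False)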

Implementing Attention in Keras

Submitted by 梦想的初衷 on 2021-02-11 07:24:18
Question: I am trying to implement attention in Keras over a simple LSTM:

    model_2_input = Input(shape=(500,))
    #model_2 = Conv1D(100, 10, activation='relu')(model_2_input)
    model_2 = Dense(64, activation='sigmoid')(model_2_input)
    model_2 = Dense(64, activation='sigmoid')(model_2)

    model_1_input = Input(shape=(None, 2048))
    model_1 = LSTM(64, dropout_U=0.2, dropout_W=0.2, return_sequences=True)(model_1_input)
    model_1, state_h, state_c = LSTM(16, dropout_U=0.2, dropout_W=0.2, return_sequences=True,
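The excerpt cuts off before the attention part, so what follows is only a sketch of one simple additive attention over the LSTM's sequence output, assuming TensorFlow 2.x. (Note that dropout_U/dropout_W are Keras 1 argument names; Keras 2 uses recurrent_dropout and dropout.)

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    seq_in = layers.Input(shape=(None, 2048))
    h = layers.LSTM(16, dropout=0.2, recurrent_dropout=0.2,
                    return_sequences=True)(seq_in)           # (batch, T, 16)

    score = layers.Dense(1)(h)                               # (batch, T, 1)
    weights = layers.Softmax(axis=1)(score)                  # attention weights over time
    # weighted sum of the hidden states -> fixed-size context vector
    context = layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, h])  # (batch, 16)

    out = layers.Dense(1, activation='sigmoid')(context)
    model = Model(seq_in, out)
    model.compile(optimizer='adam', loss='binary_crossentropy')

The context vector can then be concatenated with the question's second (dense) branch before the final classifier.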
