machine-learning

Identifying points by color

ぐ巨炮叔叔 submitted on 2021-02-12 11:40:21

Question: I am following the tutorial over here: https://www.rpubs.com/loveb/som . This tutorial shows how to use the Kohonen network (also called a SOM, a type of machine-learning algorithm) on the iris data. I ran this code from the tutorial:

```r
library(kohonen)       # fitting SOMs
library(ggplot2)       # plots
library(GGally)        # plots
library(RColorBrewer)  # colors, using predefined palettes

iris_complete <- iris[complete.cases(iris), ]  # keep only complete rows
iris_unique <- unique(iris_complete)           # remove duplicates

# scale data
iris.sc = scale
```
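
For readers working in Python rather than R, the sketch below reproduces roughly the same workflow with the minisom package (a stand-in for kohonen, not part of the tutorial); the 7x7 grid and training length are assumptions. Each observation is drawn on its winning node and coloured by species, which is one way to tell the points apart by colour:

```python
# Hypothetical Python analogue of the R/kohonen tutorial, using minisom.
import numpy as np
import matplotlib.pyplot as plt
from minisom import MiniSom
from sklearn.datasets import load_iris
from sklearn.preprocessing import scale

iris = load_iris()
X = scale(iris.data)  # z-score each feature, like R's scale()
y = iris.target       # 0, 1, 2 for the three species

som = MiniSom(7, 7, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 1000)

# Draw each observation on its best-matching unit, coloured by species;
# a little jitter keeps points that share a node visible.
rng = np.random.default_rng(0)
for xi, label in zip(X, y):
    i, j = som.winner(xi)
    plt.scatter(i + rng.uniform(-0.3, 0.3),
                j + rng.uniform(-0.3, 0.3),
                color=f"C{label}", s=15)
plt.title("Iris observations on the SOM grid, coloured by species")
plt.show()
```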

How to fix flatlined accuracy and NaN loss in tensorflow image classification

心已入冬 submitted on 2021-02-11 18:05:34

Question: I am currently experimenting with TensorFlow and machine learning and, as a challenge, I decided to try to code machine-learning software, on the Kaggle website, that can analyze brain MRI scans and predict whether a tumour exists. I did so with the code below and began training the model. However, the text that showed up during training showed that none of the loss values (training or validation) had proper values and that the accuracies flatlined, or fluctuated between two numbers (the
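
The code itself is cut off above, but the symptoms described (NaN losses, flatlined accuracy) usually come down to a few standard causes. Below is a minimal sketch of a binary image classifier, assuming 224x224 RGB scans (the question's actual input shape is not shown), with the usual fixes marked in the comments:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Fix 1: scale raw pixels from [0, 255] down to [0, 1];
    # unscaled inputs are a common source of NaN losses.
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    # Fix 2: a single sigmoid unit for a yes/no label; a mismatched
    # activation/loss pairing is what makes accuracy flatline.
    layers.Dense(1, activation="sigmoid"),
])

# Fix 3: a modest learning rate; an overly large one also produces NaNs.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```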

Keras model summary incorrect

删除回忆录丶 submitted on 2021-02-11 15:51:05

Question: I am doing data augmentation using:

```python
data_gen = image.ImageDataGenerator(rotation_range=20, width_shift_range=0.2,
                                    height_shift_range=0.2, zoom_range=0.15,
                                    horizontal_flip=False)
iter = data_gen.flow(X_train, Y_train, batch_size=64)
```

data_gen.flow() needs a rank-4 data matrix, so the shape of X_train is (60000, 28, 28, 1). We need to pass the same shape, i.e. (60000, 28, 28, 1), while defining the architecture of the model as follows:

```python
model = Sequential()
model.add(Dense(units=64, activation='relu', kernel
```
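
Dense applied to a rank-4 tensor only acts on the last axis, so the layer above produces an output of shape (None, 28, 28, 64) and the summary looks "incorrect". A minimal sketch of the usual fix, assuming ten output classes (an assumption; the question is truncated before its output layer): flatten first, so the summary shows the 2-D shapes one expects:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential([
    Flatten(input_shape=(28, 28, 1)),      # (None, 784)
    Dense(units=64, activation="relu"),    # (None, 64), not (None, 28, 28, 64)
    Dense(units=10, activation="softmax"), # assuming ten classes
])
model.summary()
```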

confusion matrix and classification report of StratifiedKFold

折月煮酒 submitted on 2021-02-11 15:33:03

Question: I am using StratifiedKFold to check the performance of my classifier. I have two classes and I am trying to build a logistic regression classifier. Here is my code:

```python
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_index, test_index in skf.split(x, y):
    x_train, x_test = x[train_index], x[test_index]
    y_train, y_test = y[train_index], y[test_index]
    tfidf = TfidfVectorizer()
    x_train = tfidf.fit_transform(x_train)
    x_test = tfidf.transform(x_test)
    clf = LogisticRegression(class
```
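
One common pattern is to accumulate the held-out predictions from every fold and build a single confusion matrix and classification report at the end. A sketch continuing the loop above, assuming x is a NumPy array of raw texts and y the binary labels; the truncated LogisticRegression(class… call is completed here with class_weight='balanced' as a guess:

```python
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report

# x: np.array of documents, y: np.array of 0/1 labels (assumed to exist)
y_true_all, y_pred_all = [], []

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_index, test_index in skf.split(x, y):
    x_train, x_test = x[train_index], x[test_index]
    y_train, y_test = y[train_index], y[test_index]

    tfidf = TfidfVectorizer()
    x_train_vec = tfidf.fit_transform(x_train)  # fit on the training fold only
    x_test_vec = tfidf.transform(x_test)

    clf = LogisticRegression(class_weight="balanced")  # assumed completion
    clf.fit(x_train_vec, y_train)

    y_true_all.extend(y_test)                   # collect every fold's held-out labels
    y_pred_all.extend(clf.predict(x_test_vec))  # ...and predictions

# one aggregate report over all ten folds
print(confusion_matrix(y_true_all, y_pred_all))
print(classification_report(y_true_all, y_pred_all))
```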

Confusion Matrix: Shuffle vs Non-Shuffle

不羁岁月 submitted on 2021-02-11 15:23:15

Question: Here is the config of my model:

```json
"model": {
    "loss": "categorical_crossentropy",
    "optimizer": "adam",
    "layers": [
        { "type": "lstm", "neurons": 180, "input_timesteps": 15, "input_dim": 103, "return_seq": true, "activation": "relu" },
        { "type": "dropout", "rate": 0.1 },
        { "type": "lstm", "neurons": 100, "activation": "relu", "return_seq": false },
        { "type": "dropout", "rate": 0.1 },
        { "type": "dense", "neurons": 30, "activation": "relu" },
        { "type": "dense", "neurons": 3, "activation": "softmax"
```
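
For reference, a sketch of the Keras model this config appears to describe; the layer sizes, activations, loss, and optimizer come from the JSON above, while the framework calls themselves are an assumed translation:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential([
    # 15 timesteps of 103 features, per the config's input_timesteps/input_dim
    LSTM(180, input_shape=(15, 103), return_sequences=True, activation="relu"),
    Dropout(0.1),
    LSTM(100, return_sequences=False, activation="relu"),
    Dropout(0.1),
    Dense(30, activation="relu"),
    Dense(3, activation="softmax"),  # three output classes
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```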

Output of Keras predict method has the wrong shape when using Google Colab's tpu strategy

ぃ、小莉子 submitted on 2021-02-11 15:14:34

Question: I made the following architecture:

```
Layer (type)                 Output Shape              Param #
=================================================================
embedding_7 (Embedding)      (None, 50, 64)            512000
_________________________________________________________________
bidirectional_5 (Bidirection (None, 200)               132000
_________________________________________________________________
dense_9 (Dense)              (None, 1)                 201
=================================================================
Total params: 644,201
Trainable params:
```
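
The parameter counts are enough to reconstruct the model, which helps when checking what shape predict should return. In the sketch below the vocabulary size of 8,000 is inferred from 512,000 / 64 (it is not stated in the question); with this architecture, model.predict on N samples should nominally return an array of shape (N, 1):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # 8,000 * 64 = 512,000 params, matching the summary (vocab size inferred)
    layers.Embedding(input_dim=8000, output_dim=64, input_length=50),
    # 2 * 4 * ((64 + 100 + 1) * 100) = 132,000 params -> (None, 200)
    layers.Bidirectional(layers.LSTM(100)),
    # 200 weights + 1 bias = 201 params -> (None, 1)
    layers.Dense(1, activation="sigmoid"),
])
model.summary()  # should report 644,201 total params
```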

Google BERT and antonym detection

旧巷老猫 submitted on 2021-02-11 15:10:55

Question: I recently learned about the following phenomenon: the word embeddings of well-known state-of-the-art Google BERT models seem to ignore any measure of semantic contrast between antonyms, in terms of the natural distance (L2 norm or cosine distance) between the corresponding embeddings. For example: the measure is the "cosine distance" (as opposed to "cosine similarity"), which means closer vectors are supposed to have a smaller distance between them. As one can see, BERT states "weak" and "powerful
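
A minimal sketch of the kind of measurement being described, assuming bert-base-uncased and mean-pooling over the token vectors (both assumptions, since the question does not say which model or pooling was used):

```python
import torch
from scipy.spatial.distance import cosine
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def embed(word: str) -> torch.Tensor:
    """Mean-pool BERT's token vectors into a single word embedding."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze()

# cosine() returns a distance (1 - cosine similarity): smaller means closer
print(cosine(embed("weak"), embed("powerful")))  # antonyms
print(cosine(embed("weak"), embed("feeble")))    # near-synonyms, for contrast
```

If the antonym pair does not come out farther apart than the synonym pair, that reproduces the phenomenon the question describes.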