classification

Construction of confusion matrix

此生再无相见时 submitted on 2020-05-15 21:20:06
Question: I have a question concerning the construction of a confusion matrix, based on the following link: Ranger Predicted Class Probability of each row in a data frame. Suppose I have the following code, for example (as explained by the answer in the link):

    library(ranger)
    library(caret)

    idx = sample(nrow(iris), 100)
    data = iris
    data$Species = factor(ifelse(data$Species == "versicolor", 1, 0))
    Train_Set = data[idx, ]
    Test_Set = data[-idx, ]
    mdl <- ranger(Species ~ ., data = Train_Set, importance = "impurity",
                  save.memory = TRUE)
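For reference, a minimal sketch of the same idea in Python rather than R (my own illustration, not the asker's setup): threshold the predicted class probabilities at an assumed 0.5 cutoff and tabulate the confusion matrix with scikit-learn.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    y = (y == 1).astype(int)  # "versicolor" vs. the rest, as in the question
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]  # P(class == 1) for each row
    pred = (proba >= 0.5).astype(int)        # assumed 0.5 decision threshold
    print(confusion_matrix(y_test, pred))    # rows: true class, cols: predicted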

Class weights in binary classification model with Keras

空扰寡人 submitted on 2020-05-14 14:47:24
Question: We know that we can pass a class-weights dictionary to the fit method for imbalanced data in a binary classification model. My question is: when using only one node in the output layer with sigmoid activation, can we still apply the class weights during training?

    model = Sequential()
    model.add(Dense(64, activation='tanh', input_shape=(len(x_train[0]),)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
    model.fit(x_train, y_train,
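For illustration, a minimal sketch of passing a class-weight dictionary to fit with a single sigmoid output (synthetic data; the weight values are illustrative assumptions):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Synthetic imbalanced binary data (illustrative).
    x_train = np.random.rand(1000, 10)
    y_train = (np.random.rand(1000) < 0.1).astype(int)  # ~10% positives

    model = Sequential()
    model.add(Dense(64, activation='tanh', input_shape=(10,)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])

    # class_weight maps each integer label to a loss multiplier; with a
    # single sigmoid unit the labels are still 0/1, so the dictionary applies.
    model.fit(x_train, y_train, epochs=2, batch_size=32,
              class_weight={0: 1.0, 1: 9.0})  # illustrative weights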

Find Distance to Decision Boundary in Decision Trees

扶醉桌前 submitted on 2020-05-08 16:07:56
Question: I want to find the distance of samples to the decision boundary of a trained decision tree classifier in scikit-learn. The features are all numeric and the feature space could be of any size. I have this visualization so far for an example 2D case, based on here:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.datasets import make_moons

    # Generate some example data
    X, y = make_moons(noise=0.3, random_state=0)

    # Train the
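One possible approach for the 2D case (an assumption on my part, not the asker's method): predict on a dense grid, collect the grid points where the predicted class flips, and take each sample's distance to the nearest such point as an approximate distance to the boundary.

    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.datasets import make_moons
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_moons(noise=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

    # Dense grid over the feature space (2D case for simplicity).
    xx, yy = np.meshgrid(
        np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 500),
        np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 500))
    grid = np.c_[xx.ravel(), yy.ravel()]
    zz = clf.predict(grid).reshape(xx.shape)

    # Grid cells whose prediction differs from a horizontal or vertical
    # neighbour lie on the decision boundary (up to grid resolution).
    boundary = np.zeros_like(zz, dtype=bool)
    boundary[:, 1:] |= zz[:, 1:] != zz[:, :-1]
    boundary[1:, :] |= zz[1:, :] != zz[:-1, :]
    boundary_pts = grid[boundary.ravel()]

    # Approximate distance from each sample to the nearest boundary point.
    dist = cKDTree(boundary_pts).query(X)[0]
    print(dist[:5])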

Neural network (perceptron) - visualizing decision boundary (as a hyperplane) when performing binary classification

我是研究僧i submitted on 2020-05-07 08:35:15
Question: I would like to visualize the decision boundary for a simple neural network with only one neuron (3 inputs, binary output). I'm extracting the weights from a Keras NN model and then attempting to draw the surface plane using matplotlib. Unfortunately, the hyperplane does not appear between the points on the scatter plot; instead, it is displayed underneath all the data points (see output image). I am calculating the z-axis of the hyperplane using the equation z = (d - ax - by) / c for a
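A minimal sketch of drawing that plane from extracted weights (the weight values below are placeholders standing in for model.get_weights(); the plane ax + by + cz = d is taken to be the boundary where the neuron's pre-activation is zero):

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder weights/bias; the boundary of a single sigmoid neuron
    # is a*x + b*y + c*z + bias = 0.
    a, b, c, bias = 0.8, -0.5, 1.2, 0.3
    d = -bias  # rewrite as a*x + b*y + c*z = d

    xs, ys = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
    zs = (d - a * xs - b * ys) / c  # z on the plane, as in the question

    fig = plt.figure()
    axes = fig.add_subplot(projection='3d')
    axes.plot_surface(xs, ys, zs, alpha=0.4)
    # Scatter the data on the SAME axes object so the plane and the
    # points share one coordinate frame and z scale.
    pts = np.random.uniform(-2, 2, size=(30, 3))  # placeholder data points
    axes.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c='k', s=10)
    plt.show()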

poor accuracy score on classification problem [closed]

醉酒当歌 submitted on 2020-05-01 09:50:26
Question: [Closed: This question needs to be more focused. It is not currently accepting answers. Closed 8 days ago.] I'm trying to build a classification model and my target is not binary. The correlations of my features with my target are all weak (mostly around 0.1). I have preprocessed my data and applied all of the algorithms I used to it (the algorithms I used are SVM, KNN, naive Bayes, logistic
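For comparing such baselines, a sketch along these lines keeps evaluation consistent across models (synthetic stand-in data; LogisticRegression stands in for the truncated "logistic" entry):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    # Synthetic stand-in for a non-binary target with weak features.
    X, y = make_classification(n_samples=1000, n_classes=3,
                               n_informative=4, random_state=0)

    models = {
        "svm": SVC(),
        "knn": KNeighborsClassifier(),
        "naive_bayes": GaussianNB(),
        "logistic": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)  # accuracy by default
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")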

Comparing AUC, log loss and accuracy scores between models

﹥>﹥吖頭↗ submitted on 2020-04-16 05:08:05
Question: I have the following evaluation metrics on the test set, after running 6 models for a binary classification problem:

    model  accuracy  logloss  AUC
    1      19%       0.45     0.54
    2      67%       0.62     0.67
    3      66%       0.63     0.68
    4      67%       0.62     0.66
    5      63%       0.61     0.66
    6      65%       0.68     0.42

I have the following questions: How can model 1 be the best in terms of logloss (its logloss is the closest to 0) when it performs the worst in terms of accuracy? What does that mean? How come model 6 has a lower AUC score than e.g. model 5, when
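For reference, a minimal sketch of how these three metrics are computed from the same predictions (illustrative data). Note that accuracy depends on a 0.5 decision threshold, while log loss and AUC are computed from the raw probabilities, which is one reason they can rank models differently:

    import numpy as np
    from sklearn.metrics import accuracy_score, log_loss, roc_auc_score

    # Illustrative true labels and predicted probabilities from one model.
    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    proba = np.array([0.2, 0.4, 0.6, 0.9, 0.7, 0.3, 0.55, 0.45])

    print("accuracy:", accuracy_score(y_true, (proba >= 0.5).astype(int)))
    print("log loss:", log_loss(y_true, proba))      # penalizes confident mistakes
    print("AUC:", roc_auc_score(y_true, proba))      # threshold-independent ranking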

How to handle LSTMs with many features in Python?

醉酒当歌 submitted on 2020-04-16 04:23:19
Question: I have a binary classification problem. I use the following Keras model to do my classification.

    input1 = Input(shape=(25, 6))
    x1 = LSTM(200)(input1)
    input2 = Input(shape=(24, 6))
    x2 = LSTM(200)(input2)
    input3 = Input(shape=(21, 6))
    x3 = LSTM(200)(input3)
    input4 = Input(shape=(20, 6))
    x4 = LSTM(200)(input4)

    x = concatenate([x1, x2, x3, x4])
    x = Dropout(0.2)(x)
    x = Dense(200)(x)
    x = Dropout(0.2)(x)
    output = Dense(1, activation='sigmoid')(x)

However, the results I get are extremely bad. I thought the
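For completeness, a sketch of how such a multi-input functional model would typically be assembled and compiled (my assumption, since the excerpt cuts off before this step; the optimizer and loss choices are illustrative):

    from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, concatenate
    from tensorflow.keras.models import Model

    # One LSTM branch per input stream, mirroring the shapes in the excerpt.
    inputs = [Input(shape=s) for s in [(25, 6), (24, 6), (21, 6), (20, 6)]]
    branches = [LSTM(200)(inp) for inp in inputs]

    x = concatenate(branches)
    x = Dropout(0.2)(x)
    x = Dense(200)(x)  # linear activation, as in the excerpt
    x = Dropout(0.2)(x)
    output = Dense(1, activation='sigmoid')(x)

    model = Model(inputs=inputs, outputs=output)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.summary()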