classification

Keras model doesn't learn at all

Submitted by 馋奶兔 on 2019-12-11 19:50:16
Question: My model weights (I output them to weights_before.txt and weights_after.txt) are precisely the same before and after training, i.e. the training doesn't change anything; no fitting happens. My data look like this (I basically want the model to predict the sign of feature: result is 0 if feature is negative, 1 if it is positive):

    ,feature,zerosColumn,result
    0,-5,0,0
    1,5,0,1
    2,-3,0,0
    3,5,0,1
    4,3,0,1
    5,3,0,1
    6,-3,0,0
    ...

Brief summary of my approach: Load the data. Split it column-wise
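For reference, a minimal sketch of the kind of model that should learn this sign rule, using the column names from the excerpt; the file name, layer sizes and training settings are placeholders, not the asker's actual code:

    import pandas as pd
    from keras.models import Sequential
    from keras.layers import Dense

    df = pd.read_csv("data.csv")            # columns: feature, zerosColumn, result
    X = df[["feature"]].values
    y = df["result"].values

    model = Sequential([
        Dense(8, activation="relu", input_shape=(1,)),
        Dense(1, activation="sigmoid"),      # binary output: sign of the feature
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=50, batch_size=8, verbose=0)
    print(model.get_weights())               # should differ from the initial weights

If the weights printed here still match the initial ones, the problem is in the data pipeline or the fit call rather than the architecture.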

LIBSVM: Same labels for all tests?

Submitted by ≯℡__Kan透↙ on 2019-12-11 19:32:24
Question: I have 500 feature vectors, each labelled in the range 1-6 for 6 classes. I store them in a .mat file which looks like:

    m = load('TrainingData')
    m =
        Labels: [1x500 double]   % labels for each class in 1-6
        Data:   [500x20 double]  % vectors

Following a forum post, I passed the vectors in sparse format:

    TrData = sparse(m.Data);

Then I passed this data to train an SVM using LIBSVM:

    svm_model = svmtrain(Labels, TrData, '-c 1 -g 0.2 -b 1');

The generated model looks like:

    svm_model =
        Parameters:
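A rough scikit-learn analogue of the LIBSVM call above (not the asker's MATLAB code); feature scaling is added because unscaled features are a common reason an RBF-kernel SVM predicts one class for every test vector. The arrays are stand-ins for the 500x20 Data matrix and the 1-6 Labels:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.random.rand(500, 20)              # stand-in for the 500x20 Data matrix
    y = np.random.randint(1, 7, size=500)    # stand-in for Labels in 1..6

    X_scaled = StandardScaler().fit_transform(X)
    clf = SVC(C=1, gamma=0.2, probability=True)   # mirrors '-c 1 -g 0.2 -b 1'
    clf.fit(X_scaled, y)
    print(clf.predict(X_scaled[:10]))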

Classification accuracy based on a single feature set

Submitted by 五迷三道 on 2019-12-11 18:32:58
Question: I am trying to classify data based on prespecified labels. I have two columns, shown below:

    room_class                      room_cluster
    Standard single sea view        Standard
    Deluxe twin Single              Deluxe
    Suite Superior room ocean view  Suite
    Superior Double twin            Superior
    Deluxe Double room              Deluxe

As seen above, room_cluster is the set of labels. The code snippet is as follows:

    le = preprocessing.LabelEncoder()
    datar = df
    #### Separate data into features and labels
    x = datar.room_class
    y = datar.room_cluster
    #### Using
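A minimal sketch of one way to finish this setup, assuming scikit-learn throughout: room_cluster is encoded with the LabelEncoder from the excerpt, while the free-text room_class is turned into bag-of-words features. CountVectorizer, MultinomialNB and the tiny inline DataFrame (a few rows from above) are my additions, not the asker's code:

    import pandas as pd
    from sklearn import preprocessing
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    df = pd.DataFrame({
        "room_class": ["Standard single sea view", "Superior Double twin",
                       "Deluxe Double room"],
        "room_cluster": ["Standard", "Superior", "Deluxe"],
    })

    le = preprocessing.LabelEncoder()
    y = le.fit_transform(df["room_cluster"])   # labels -> integers

    vec = CountVectorizer()                    # bag-of-words over the room names
    X = vec.fit_transform(df["room_class"])

    clf = MultinomialNB().fit(X, y)
    pred = clf.predict(vec.transform(["Deluxe twin room"]))
    print(le.inverse_transform(pred))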

Confusion matrix: my argmax, which converts prediction classes to one-hot vectors, does not work

Submitted by 随声附和 on 2019-12-11 17:56:11
Question: I am using convolutional neural networks for classification. I watched a YouTube video that explained the confusion matrix, and following what the video explained I used this code:

    import seaborn as sns
    # Predict the values from the validation dataset
    Y_pred = model.predict(X_test)
    # Convert predictions classes to one hot vectors
    Y_pred_classes = np.argmax(Y_pred, axis=1)
    # Convert validation observations to one hot vectors
    print(Y_pred_classes)
    Y_true =
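A sketch of the usual pattern (not the asker's exact code): both the predicted scores and the one-hot true labels have to be converted to class indices with argmax before building the confusion matrix. The small arrays below are stand-ins for model.predict(X_test) and the one-hot Y_test:

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix

    Y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])  # stand-in for model.predict(X_test)
    Y_test = np.array([[1, 0], [0, 1], [0, 1]])              # stand-in one-hot true labels

    Y_pred_classes = np.argmax(Y_pred, axis=1)   # scores -> class indices
    Y_true = np.argmax(Y_test, axis=1)           # one-hot labels -> class indices

    cm = confusion_matrix(Y_true, Y_pred_classes)
    sns.heatmap(cm, annot=True, fmt="d")
    plt.show()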

Same output from the Keras model

Submitted by 旧巷老猫 on 2019-12-11 16:59:10
Question: I have a Keras model for predicting moves in a game. It has an input shape of (160, 120, 1). I have the following model with an output of 9 nodes:

    from keras.models import Sequential
    from keras.layers.core import Dense, Dropout, Activation, Flatten
    from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D
    from keras.layers.normalization import BatchNormalization
    from keras.optimizers import Adam
    from keras.regularizers import l2
    from keras import optimizers

    def alexnet
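A minimal sketch of a Keras model with the stated input shape (160, 120, 1) and 9 output nodes; this is not the asker's alexnet() definition, and the layer sizes, optimizer and learning rate are assumptions:

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
    from keras.optimizers import Adam

    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(160, 120, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dense(9, activation="softmax"),      # 9 output nodes, one per move
    ])
    model.compile(optimizer=Adam(lr=1e-4), loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()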

Inception v4 model: saving and loading the model after retraining

Submitted by 北慕城南 on 2019-12-11 15:48:46
Question: I am fairly new to deep learning and am trying to do something related to image classification. I have managed to retrain the inception_v4 model by modifying the retrain.py code that was provided for the inception_v3 model. Getting a prediction in the testing stage of the same file works, but when I try to get a prediction after loading the model from a saved graph def, there are errors with the input image. Can someone show me how to save and load the Inception v4 model after
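A hedged sketch of the usual TF1 pattern for loading a frozen GraphDef written by a retrain-style script and running one image through it; the file names and the tensor names ('final_result:0', 'DecodeJpeg/contents:0') are taken from the stock retrain.py and may well differ for a modified Inception v4 graph:

    import tensorflow as tf

    # Read the frozen graph produced by the retraining script
    with tf.gfile.GFile("output_graph.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=graph) as sess:
        image_data = tf.gfile.GFile("test.jpg", "rb").read()   # raw JPEG bytes
        predictions = sess.run(graph.get_tensor_by_name("final_result:0"),
                               {"DecodeJpeg/contents:0": image_data})
        print(predictions)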

ResNet for binary classification: only 2 values of cross-validation accuracy

Submitted by 随声附和 on 2019-12-11 15:32:28
Question: I am new to Python and Keras. I am trying to do binary classification using transfer learning from ResNet. My dataset is very small, but I am using image augmentation. My cross-validation accuracy is always one of just two values, 0.3442 or 0.6558, for all images. Can anyone tell me why this happens? Also, when I predict (0 or 1), it labels all images as one class (0). Here is my code:

    from keras.preprocessing.image import ImageDataGenerator, load_img
    from keras.models import Sequential, Model, load
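Incidentally, 0.3442 and 0.6558 sum to 1, which suggests the model always predicts a single class and the accuracy simply mirrors the class proportions in each fold. Below is a minimal sketch of a common binary transfer-learning setup with ResNet50, not the asker's code; image size, frozen layers and optimizer are assumptions:

    from keras.applications.resnet50 import ResNet50
    from keras.models import Model
    from keras.layers import GlobalAveragePooling2D, Dense

    base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False               # freeze the pretrained backbone

    x = GlobalAveragePooling2D()(base.output)
    x = Dense(128, activation="relu")(x)
    out = Dense(1, activation="sigmoid")(x)   # single sigmoid unit for binary output

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])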

Python SVM function with Huber loss

Submitted by 假装没事ソ on 2019-12-11 15:30:10
Question: I need a Python SVM classifier with the Huber loss function, but its default loss function is hinge loss. Do you know how I can assign a different loss function to the Python SVM?

    svc = svm.SVC(kernel='linear', C=1, gamma=1).fit(data, label)

Answer 1: There is really no such thing as an "SVM with Huber loss", as an SVM is literally a linear (or kernelized) model trained with hinge loss. If you change the loss, it stops being an SVM. Consequently, libraries do not have a loss parameter, as changing it does not apply to the
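If a linear classifier trained with a Huber-style loss is acceptable (as the answer notes, it is then no longer an SVM), scikit-learn's SGDClassifier exposes loss='modified_huber'; the data below is a stand-in for the asker's data and label arrays:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    data = np.random.rand(100, 5)             # stand-in features
    label = np.random.randint(0, 2, 100)      # stand-in binary labels

    clf = SGDClassifier(loss="modified_huber", alpha=1e-4, max_iter=1000)
    clf.fit(data, label)
    print(clf.predict(data[:5]))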

Are the feature values in the KDD99 data set wrong?

Submitted by 淺唱寂寞╮ on 2019-12-11 15:27:49
Question: In the KDD99 data set, for a huge number of connections the values of the 32nd and 33rd features are greater than 100. I can't understand how a feature computed over a window of 100 connections can take a value greater than 100. I have consulted a lot of material but found nothing.

Answer 1: The dataset contains 41 features for each connection. These features were obtained by preprocessing TCP dump files. To do so, packet information in the TCP dump file was summarized into connections. Specifically (http:/
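A small sketch for inspecting those columns with pandas; the file name is a placeholder, and in the standard 41-feature list the 32nd and 33rd features are dst_host_count and dst_host_srv_count:

    import pandas as pd

    # The raw KDD99 files have no header row: 41 feature columns plus the label
    df = pd.read_csv("kddcup.data_10_percent", header=None)
    print(df[31].describe())   # 32nd feature (0-indexed column 31)
    print(df[32].describe())   # 33rd feature (0-indexed column 32)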

The confidence level of each specific instance in WEKA?

Submitted by 六月ゝ 毕业季﹏ on 2019-12-11 15:04:12
Question: I'm new to WEKA and machine learning in general. I have a test set with about 6500 instances and a model that has already been trained on a training set. Once I run the test set through the saved model, is there a way I can extract the confidence level of each specific instance? By confidence level, I mean a numerical value that expresses the probability that the classifier has classified a specific instance correctly. I want this confidence number for each instance in the file. Is
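One heavily hedged sketch, assuming the third-party python-weka-wrapper3 package (an assumption; the asker may be using the WEKA GUI, and exact method names can differ between package versions): the per-instance "confidence" here is the class probability distribution the classifier assigns to each instance. File names are placeholders:

    import weka.core.jvm as jvm
    import weka.core.serialization as serialization
    from weka.core.converters import Loader
    from weka.classifiers import Classifier

    jvm.start()

    loader = Loader(classname="weka.core.converters.ArffLoader")
    test = loader.load_file("test.arff")
    test.class_is_last()

    objects = serialization.read_all("saved.model")   # load the trained WEKA model
    classifier = Classifier(jobject=objects[0])

    for inst in test:
        dist = classifier.distribution_for_instance(inst)  # one probability per class
        print(dist)

    jvm.stop()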