classification

Facial expression classification in real time using SVM

Submitted by 孤街醉人 on 2019-12-20 09:37:57
Question: I am currently working on a project where I have to extract the facial expression of a user (only one user at a time, from a webcam), such as sad or happy. My method for classifying facial expressions is: use OpenCV to detect the face in the image, then use ASM and Stasm to get the facial feature points. Now I am trying to do the facial expression classification. Is SVM a good option? And if it is, how can I start with SVM: how would I train an SVM for each emotion using these landmarks? Answer 1: Yes, SVMs
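The pipeline the question describes (landmark features in, emotion label out) can be prototyped roughly as below. This is a minimal sketch: the "landmark" vectors are synthetic stand-ins for the Stasm output, and the hand-rolled linear SVM (hinge-loss sub-gradient descent) is only illustrative; a real project would train one classifier per emotion (one-vs-rest) with libsvm or scikit-learn.

```python
import numpy as np

# Synthetic stand-ins for flattened (x, y) landmark vectors per face.
rng = np.random.default_rng(0)
happy = rng.normal(loc=1.0, scale=0.3, size=(50, 10))
sad = rng.normal(loc=-1.0, scale=0.3, size=(50, 10))
X = np.vstack([happy, sad])
y = np.hstack([np.ones(50), -np.ones(50)])  # +1 = happy, -1 = sad

# Linear SVM trained by sub-gradient descent on 0.5*||w||^2 + C * hinge loss.
w = np.zeros(X.shape[1])
b = 0.0
lr, C = 0.01, 1.0
for epoch in range(200):
    for i in range(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:  # inside the margin: hinge term contributes
            w = w - lr * (w - C * y[i] * X[i])
            b = b + lr * C * y[i]
        else:  # correctly classified with margin: only regularizer
            w = w - lr * w

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
print(accuracy)
```

On this cleanly separated toy data the classifier recovers the two classes; the same shape of loop (features from landmarks, one binary SVM per emotion) carries over to real data.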

Keras - Difference between categorical_accuracy and sparse_categorical_accuracy

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-20 09:11:15
Question: What is the difference between categorical_accuracy and sparse_categorical_accuracy in Keras? There is no hint in the documentation for these metrics, and asking Dr. Google did not turn up answers either. The source code can be found here:

    def categorical_accuracy(y_true, y_pred):
        return K.cast(K.equal(K.argmax(y_true, axis=-1), K.argmax(y_pred, axis=-1)), K.floatx())

    def sparse_categorical_accuracy(y_true, y_pred):
        return K.cast(K.equal(K.max(y_true, axis=-1), K.cast(K.argmax(y
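The practical difference is the label format the metric expects: categorical_accuracy compares argmaxes of one-hot targets against predictions, while sparse_categorical_accuracy compares integer class indices directly. A NumPy re-implementation of the same comparisons (a sketch, not the Keras backend code) makes this visible:

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    # y_true is one-hot encoded, shape (n, num_classes)
    return np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1))

def sparse_categorical_accuracy(y_true, y_pred):
    # y_true holds integer class indices, shape (n,)
    return np.mean(y_true == np.argmax(y_pred, axis=-1))

y_pred = np.array([[0.1, 0.8, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.2, 0.2, 0.6]])
one_hot = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
labels = np.array([1, 0, 2])  # the same targets, integer-encoded

print(categorical_accuracy(one_hot, y_pred))        # 1.0
print(sparse_categorical_accuracy(labels, y_pred))  # 1.0
```

Both metrics agree on equivalent targets; you pick one or the other based on whether your labels are one-hot vectors or plain integers (matching categorical_crossentropy vs sparse_categorical_crossentropy as the loss).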

Categorizing Words and Category Values

Submitted by  ̄綄美尐妖づ on 2019-12-20 08:50:05
Question: We were set an algorithm problem in class today, as an "if you figure out a solution, you don't have to do this subject" challenge. So of course, we all thought we would give it a go. Basically, we were provided a DB of 100 words and 10 categories. There is no predefined match between the words and the categories; it is simply a list of 100 words and 10 categories. We have to "place" the words into the correct category - that is, we have to "figure out" how to put the words into the correct category. Thus
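Whatever the semantics used, most solutions to this kind of problem share one shape: score every (word, category) pair with some similarity function and assign each word to its best-scoring category. The sketch below uses plain string similarity from the standard library purely as a placeholder scorer; a serious attempt would swap in WordNet path similarity, corpus co-occurrence counts, or word embeddings.

```python
from difflib import SequenceMatcher

def similarity(word, category):
    # Placeholder scorer: surface string similarity only. Real solutions
    # would use a semantic measure (WordNet, co-occurrence, embeddings).
    return SequenceMatcher(None, word, category).ratio()

def categorize(words, categories):
    # Assign every word to its highest-scoring category.
    return {w: max(categories, key=lambda c: similarity(w, c)) for w in words}

words = ["cats", "doggy", "birds"]
categories = ["cat", "dog", "bird"]
result = categorize(words, categories)
print(result)
```

The toy vocabulary here is chosen so the string scorer happens to work; the point is the argmax-over-scores structure, not the scorer itself.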

Higher validation accuracy than training accuracy using TensorFlow and Keras

Submitted by 回眸只為那壹抹淺笑 on 2019-12-20 08:49:43
Question: I'm trying to use deep learning to predict income from 15 self-reported attributes from a dating site. We're getting rather odd results, where our validation data is getting better accuracy and lower loss than our training data. And this is consistent across different sizes of hidden layers. This is our model:

    for hl1 in [250, 200, 150, 100, 75, 50, 25, 15, 10, 7]:
        def baseline_model():
            model = Sequential()
            model.add(Dense(hl1, input_dim=299, kernel_initializer='normal', activation='relu',
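One common explanation for this pattern is that regularizers such as Dropout are active while the training metrics are computed but disabled at validation time, so the "training" numbers describe a handicapped network. The simulation below (a sketch, not the questioner's model) scores the same fixed linear model on the same data twice, once with dropout noise applied and once without, to show the gap that creates:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
w = rng.normal(size=20)
y = (X @ w > 0).astype(float)  # labels that the model w fits perfectly

def accuracy(X, w, y, dropout_rate=0.0, rng=rng):
    if dropout_rate > 0:
        # Inverted dropout: zero features at random, rescale the survivors.
        mask = rng.random(X.shape) >= dropout_rate
        X = X * mask / (1 - dropout_rate)
    pred = (X @ w > 0).astype(float)
    return (pred == y).mean()

train_style = accuracy(X, w, y, dropout_rate=0.5)  # noise on, like training
val_style = accuracy(X, w, y, dropout_rate=0.0)    # noise off, like validation
print(train_style, val_style)
```

The dropout-corrupted pass scores strictly worse even though the model and data are identical, which is the same asymmetry Keras reports between its per-epoch training metrics and validation metrics. (Keras training metrics are also averaged over the epoch while the model is still improving, which pulls them down further.)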

Extract tf-idf vectors with lucene

Submitted by 心已入冬 on 2019-12-20 08:39:13
Question: I have indexed a set of documents using Lucene. I have also stored a DocumentTermVector for each document's content. I wrote a program and got the term frequency vector for each document, but how can I get the tf-idf vector of each document? Here is my code that outputs term frequencies in each document:

    Directory dir = FSDirectory.open(new File(indexDir));
    IndexReader ir = IndexReader.open(dir);
    for (int docNum = 0; docNum < ir.numDocs(); docNum++) {
        System.out.println(ir.document(docNum).getField(
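Once the term frequencies are in hand, tf-idf only needs the document frequency of each term and the document count. The sketch below shows the arithmetic with the classic tf * log(N / df) weighting; note that Lucene's own Similarity implementations use slightly different smoothing (e.g. 1 + log(N / (df + 1))), so adjust to match the scorer you want to be consistent with.

```python
import math

# docs maps docId -> {term: term frequency}, as one might collect from
# the index's term vectors. The data here is made up for illustration.
docs = {
    0: {"lucene": 3, "index": 1},
    1: {"lucene": 1, "search": 2},
    2: {"search": 1, "ranking": 4},
}

N = len(docs)
df = {}  # document frequency: in how many docs each term appears
for terms in docs.values():
    for t in terms:
        df[t] = df.get(t, 0) + 1

# tf-idf vector per document: tf * log(N / df)
tfidf = {
    doc_id: {t: tf * math.log(N / df[t]) for t, tf in terms.items()}
    for doc_id, terms in docs.items()
}
print(tfidf[0]["lucene"])  # 3 * ln(3/2)
```

In the Lucene program itself, df per term comes from IndexReader (docFreq) and N from numDocs(), so the second loop is replaced by index lookups.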

How to Interpret Predict Result of SVM in R?

Submitted by 假装没事ソ on 2019-12-20 08:37:20
Question: I'm new to R and I'm using the e1071 package for SVM classification in R. I used the following code:

    data <- loadNumerical()
    model <- svm(data[,-ncol(data)], data[,ncol(data)], gamma=10)
    print(predict(model, data[c(1:20),-ncol(data)]))

loadNumerical is for loading data, and the data are of the following form (the first 8 columns are input and the last column is the classification):

      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]      [,9]
    1   39    1   -1   43   -1    1    0  0.9050497  0
    2   23   -1   -1   30   -1   -1    0  1.6624974  1
    3   50   -1   -1   49    1
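For interpreting what predict returns: for a binary SVM, the prediction is just the sign of the decision value w.x + b (in e1071 you can expose it with predict(..., decision.values = TRUE)), and its magnitude relates to distance from the separating hyperplane. A small numeric sketch, with made-up weights standing in for a trained model:

```python
import numpy as np

w = np.array([2.0, -1.0])  # hypothetical trained weight vector
b = -0.5                   # hypothetical bias
x = np.array([1.0, 0.5])   # one sample to classify

decision = w @ x + b                       # 2*1 - 1*0.5 - 0.5 = 1.0
label = 1 if decision > 0 else -1          # sign picks the class
distance = abs(decision) / np.linalg.norm(w)  # geometric margin of x
print(label, decision, distance)
```

With a nonlinear kernel (the gamma=10 in the question implies the default RBF kernel), the same sign/magnitude reading applies, but the decision value is a kernel expansion over support vectors rather than a dot product with an explicit w.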

Classification is poor although term frequency is right

Submitted by [亡魂溺海] on 2019-12-20 07:19:14
Question: I am using the function below to check which are the most frequent words per category, and then to observe how some sentences get classified. The results are surprisingly wrong:

    # The function
    def show_top10(classifier, vectorizer, categories):
        feature_names = np.asarray(vectorizer.get_feature_names())
        for i, category in enumerate(categories):
            top10 = np.argsort(classifier.coef_[i])[-10:]
            print("%s: %s" % (category, " ".join(feature_names[top10])))

    # Using the function on the
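What show_top10 actually reads off is worth pinning down: for each class row of coef_, it argsorts ascending and keeps the last 10 indices, i.e. the highest-weight features for that class (which is not the same as "most frequent words" for every classifier). A reduced version with a 4-word vocabulary, using made-up weights:

```python
import numpy as np

# One coefficient row per class, one column per vocabulary word.
coef = np.array([[0.1, 2.0, -0.5, 1.5],    # class 0
                 [1.0, -2.0, 0.3, 0.0]])   # class 1
feature_names = np.array(["ball", "goal", "tax", "vote"])

# Same idiom as show_top10, with top-2 instead of top-10:
top2 = np.argsort(coef[0])[-2:]  # indices of the two largest weights
print(feature_names[top2])
```

For class 0 this selects "vote" and "goal" (weights 1.5 and 2.0). If the classifier's weights encode something other than per-class evidence, the printout will look wrong even when the term frequencies are right, which is one place to look when debugging results like those in the question.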

Hyperplane in SVM classifier

Submitted by 橙三吉。 on 2019-12-20 04:22:24
Question: I want to get a formula for the hyperplane in an SVM classifier, so that I can calculate the probability of true classification for each sample according to its distance from the hyperplane. For simplicity, imagine MATLAB's own example:

    load fisheriris
    xdata = meas(51:end,3:4);
    group = species(51:end);
    svmStruct = svmtrain(xdata,group,'showplot',true);

which produces a plot in which the hyperplane is a line, and I want the formula for that line. The hyperplane can also have a messy shape! What can I do? Maybe there are other ways.
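On turning distance from the hyperplane into a probability: the standard approach is Platt scaling, which fits a sigmoid 1 / (1 + exp(A*f + B)) to the decision values f on held-out data, so no explicit hyperplane formula is needed even for "messy" kernel boundaries. The A and B below are made up for illustration; in practice they are fitted (this is what libsvm's -b 1 option does, and MATLAB's newer fitcsvm offers fitPosterior for the same purpose).

```python
import math

def platt(f, A=-1.5, B=0.0):
    # Sigmoid mapping from decision value f to P(class = +1).
    # A and B are hypothetical here; Platt scaling fits them by maximum
    # likelihood on held-out decision values.
    return 1.0 / (1.0 + math.exp(A * f + B))

for f in (-2.0, 0.0, 2.0):
    print(round(platt(f), 3))
```

A sample on the boundary (f = 0) maps to probability 0.5, and confidence grows monotonically with distance on either side, which is exactly the behavior the question is after.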

About image backgrounds while preparing training dataset for cascaded classifier

Submitted by 江枫思渺然 on 2019-12-20 04:16:17
Question: I have a question about preparing the dataset of positive samples for a cascaded classifier that will be used for object detection. As positive samples, I have been given 3 sets of images:

    - a set of colored images in full size (about 1200x600), with a white background and with the object displayed at a different angle in each image
    - another set with the same images in grayscale, with a white background, scaled down to the detection window size (60x60)
    - another set with the same images in
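The second set already matches the usual preprocessing for cascade training: grayscale, scaled to the detection window. For reference, that normalization is a short transform; the sketch below uses the standard ITU-R BT.601 luma weights and a nearest-neighbour down-scale as a stand-in for cv2.resize / opencv_createsamples preprocessing.

```python
import numpy as np

def to_gray(img):
    # img: (H, W, 3) uint8 -> (H, W) uint8, ITU-R BT.601 luma weights.
    return (img @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def resize_nn(img, size):
    # Nearest-neighbour down-scale of a square crop to size x size;
    # stand-in for a proper interpolating resize.
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

# A white 1200x600 positive sample, as in the first set.
img = np.full((600, 1200, 3), 255, dtype=np.uint8)
sample = resize_nn(to_gray(img), 60)
print(sample.shape)  # (60, 60)
```

Note the aspect-ratio caveat: squashing 1200x600 straight to 60x60 distorts the object, so in practice you would crop the object's bounding box first and resize that.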

expected first layer to have x dimensions but got an array with shape y

Submitted by 試著忘記壹切 on 2019-12-20 04:10:21
Question: (I am just starting TensorFlow.js on Node.) I have been searching the web up and down for an answer.

The confusion: I have image data from image1 = tf.fromPixels(img), and I tried inputting it along with other image data as xs = tf.tensor([image1, image2]). No matter how I input a bunch of images into xs for model.fit, the program outputs the errors described below.

What I already tried: when I run the program I get the error

    Error: Error when checking input: expected conv2d
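The shape the error points at is the usual one: a Conv2D input layer expects a 4-D batch of shape (num_images, height, width, channels), so individual 3-D image tensors need to be stacked along a new leading axis rather than wrapped in a plain list. In tf.js the call is tf.stack([image1, image2]); the NumPy sketch below shows the same shape arithmetic:

```python
import numpy as np

# Two 3-D image tensors, (height, width, channels), like tf.fromPixels output.
image1 = np.zeros((28, 28, 3))
image2 = np.ones((28, 28, 3))

# Stack along a new leading batch axis -> (num_images, h, w, channels),
# the 4-D shape a Conv2D input layer expects for model.fit.
xs = np.stack([image1, image2])
print(xs.shape)  # (2, 28, 28, 3)
```

If the model was built with inputShape: [28, 28, 3], this batched tensor is what model.fit wants as xs; passing tensors inside a JavaScript array to tf.tensor does not produce that layout.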