Question
I'm wondering whether sklearn has a function that computes accuracy (the agreement between the actual and predicted labels), and how to print it out.
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()
naive_classifier = GaussianNB()

# fit on the full dataset, then predict on the same data;
# y and pr hold the same training-set predictions
y = naive_classifier.fit(iris.data, iris.target).predict(iris.data)
pr = naive_classifier.predict(iris.data)
Answer 1:
Most classifiers in scikit-learn have a built-in score() method: pass in your X_test and y_test and it outputs the default metric for that estimator, which for classification estimators is mean accuracy.
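For example, here is a minimal sketch of score() using the classifier from the question (scored on the training data, so the number is optimistic):

from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()
naive_classifier = GaussianNB().fit(iris.data, iris.target)

# score() returns the mean accuracy of predict() on the given data and labels
print(naive_classifier.score(iris.data, iris.target))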
The sklearn.metrics module also provides many functions that compute different metrics, such as accuracy, precision, and recall.
For your specific question you need accuracy_score:
from sklearn.metrics import accuracy_score

# compare the true labels with the pr predictions from the question
score = accuracy_score(iris.target, pr)
print(score)
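If you want accuracy on data the model has not seen, a self-contained sketch with a held-out test set looks like this (the test_size and random_state values are arbitrary choices for illustration, not part of the original answer):

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)  # illustrative split

naive_classifier = GaussianNB().fit(X_train, y_train)
pr = naive_classifier.predict(X_test)

# fraction of test points whose predicted label matches the true label
print(accuracy_score(y_test, pr))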
Answer 2:
You can use accuracy_score; the documentation is at https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html.
Implement it like this:
from sklearn.metrics import accuracy_score

# the documented signature is accuracy_score(y_true, y_pred); for plain
# accuracy the argument order does not change the result
accuracy = accuracy_score(labels_test, prediction)
This returns a float equal to (number of points classified correctly) / (total number of points in your test set).
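To make that formula concrete, here is a small hand-checkable sketch (the toy labels are made up for illustration):

from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]

# 4 of the 5 labels match, so both lines print 0.8
print(sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true))
print(accuracy_score(y_true, y_pred))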
Answer 3:
For classification problems use metrics.accuracy_score; for regression problems use metrics.r2_score.
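A minimal sketch of the two metrics side by side, with toy numbers chosen only for illustration:

from sklearn.metrics import accuracy_score, r2_score

# classification: fraction of exactly matching labels (3 of 4 -> 0.75)
print(accuracy_score([0, 1, 1, 0], [0, 1, 0, 0]))

# regression: coefficient of determination R^2 (about 0.949 here)
print(r2_score([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))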
Answer 4:
You have to import accuracy_score from sklearn.metrics. It should look like this:
from sklearn.metrics import accuracy_score
print(accuracy_score(test_labels, predictions))
The formula for accuracy is (number of points classified correctly) / (total number of points in the test set).
Source: https://stackoverflow.com/questions/42471082/how-to-find-out-the-accuracy