I am new to both Python and scikit-learn, so please bear with me.
I took the source code for the k-means clustering algorithm from k means clustering.
I then modified it.
Forget about the Bunch object. It's just an implementation detail to load the toy datasets that are bundled with scikit-learn.
In real life, with your own data, you just have to call directly:
from sklearn.cluster import KMeans

km = KMeans(n_clusters=n_clusters).fit(my_document_features)
then collect cluster assignments from:
km.labels_
my_document_features is a 2D data structure: either a numpy array or a scipy.sparse matrix with shape (n_documents, n_features).
km.labels_ is a 1D numpy array with shape (n_documents,). Hence the first element in labels_ is the index of the cluster of the document described in the first row of the my_document_features feature matrix.
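For instance, here is a minimal runnable sketch with a random dense matrix standing in for real document features (the shapes, the seed and n_clusters=3 are all arbitrary):

import numpy as np
from sklearn.cluster import KMeans

# made-up feature matrix: 100 "documents" with 20 features each
my_document_features = np.random.RandomState(42).rand(100, 20)
km = KMeans(n_clusters=3).fit(my_document_features)

print(km.labels_.shape)  # (100,): one cluster index per document
print(km.labels_[0])     # cluster of the document in the first row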
Typically you would build my_document_features with a TfidfVectorizer object:
from sklearn.feature_extraction.text import TfidfVectorizer

my_document_features = TfidfVectorizer().fit_transform(my_text_documents)
and my_text_documents would be either a list of Python unicode objects if you read the documents directly (e.g. from a database, rows of a single CSV file, or whatever you want), or alternatively:
vec = TfidfVectorizer(input='filename')
my_document_features = vec.fit_transform(my_text_files)
where my_text_files is a Python list of the paths of your document files on your hard drive (assuming they are encoded in UTF-8).
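For example, you could collect those paths with glob (the my_corpus directory is made up for illustration):

import glob

# hypothetical folder of UTF-8 encoded .txt files
my_text_files = sorted(glob.glob("my_corpus/*.txt"))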
The length of my_text_files or my_text_documents should be n_documents, hence the mapping to km.labels_ is direct.
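Putting it all together on a handful of in-memory documents (the toy strings and n_clusters=2 are made up for illustration):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

my_text_documents = [
    "the cat sat on the mat",
    "dogs and cats make friendly pets",
    "python is a programming language",
    "scikit-learn is a python library",
]

my_document_features = TfidfVectorizer().fit_transform(my_text_documents)
km = KMeans(n_clusters=2).fit(my_document_features)

# the mapping is direct: document i belongs to cluster km.labels_[i]
for doc, label in zip(my_text_documents, km.labels_):
    print(label, doc)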
As scikit-learn is not just for clustering or categorizing documents, we use the name "sample" instead of "document". This is why you will see n_samples instead of n_documents used to document the expected shapes of the arguments and attributes of all the estimators in the library.