How to get flat clustering corresponding to color clusters in the dendrogram created by scipy


Using the code posted here, I created a nice hierarchical clustering:

\"scipy

Let's say I want the flat cluster assignments that correspond to the colored clusters in the dendrogram above. How do I get them?

4 Answers
  • 2020-12-03 00:14

    I wrote some code to decondense the linkage matrix. It returns a dictionary containing the indexes of the labels that are grouped together at each agglomeration step. I've only tried it out on the results of complete-linkage clustering. The keys of the dict start at len(labels) because initially each label is treated as its own cluster, and every merge creates a new cluster id. This may answer your question.

    import pandas as pd
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    
    np.random.seed(123)
    labels = ['ID_0', 'ID_1', 'ID_2', 'ID_3', 'ID_4']
    
    # correlation matrix of 5 random observations, used as the input to linkage
    x_corr = np.corrcoef(np.random.random_sample([5, 3]) * 10)
    row_clusters = linkage(x_corr, method='complete')
    
    def extract_levels(row_clusters, labels):
        clusters = {}
        for row in range(row_clusters.shape[0]):
            cluster_n = row + len(labels)
            # which clusters / labels are merged in this row
            glob1, glob2 = row_clusters[row, 0], row_clusters[row, 1]
    
            this_clust = []
            for glob in [glob1, glob2]:
                # if the id refers to a previously formed cluster, pull its members
                if glob > (len(labels) - 1):
                    this_clust += clusters[int(glob)]
                # if it isn't, add the label index to this cluster
                else:
                    this_clust.append(glob)
    
            clusters[cluster_n] = this_clust
        return clusters
    
    

    Returns:

    {5: [0.0, 2.0],
     6: [3.0, 4.0],
     7: [1.0, 0.0, 2.0],
     8: [3.0, 4.0, 1.0, 0.0, 2.0]}
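
    If you need label names rather than index positions, a small follow-up sketch (the dict comprehension and int() casts are my addition, not part of the answer's code):

    clusters = extract_levels(row_clusters, labels)
    # map the stored index positions back to the original label names
    named = {k: [labels[int(i)] for i in members] for k, members in clusters.items()}
    # e.g. named[5] -> ['ID_0', 'ID_2']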
    
  • 2020-12-03 00:21

    I know this is very late to the game, but I made a plotting object based on the code from the post here. It's published on PyPI, so to install it you just have to run

    pip install pydendroheatmap
    

    Check out the project's GitHub page here: https://github.com/themantalope/pydendroheatmap

  • 2020-12-03 00:31

    I think you're on the right track. Let's try this:

    import numpy as np
    import scipy.cluster.hierarchy as sch
    
    X = np.random.randn(100, 2)  # 100 two-dimensional observations
    d = sch.distance.pdist(X)    # vector of (100 choose 2) pairwise distances
    L = sch.linkage(d, method='complete')
    ind = sch.fcluster(L, 0.5 * d.max(), 'distance')
    

    ind will give you the cluster index for each of the 100 input observations. The assignments depend on the method you used in linkage; try method='single', 'complete', and 'average', and note how ind differs. (To match the dendrogram's default colors specifically, see the sketch after the example below.)

    Example:

    In [59]: L = sch.linkage(d, method='complete')
    
    In [60]: sch.fcluster(L, 0.5*d.max(), 'distance')
    Out[60]: 
    array([5, 4, 2, 2, 5, 5, 1, 5, 5, 2, 5, 2, 5, 5, 1, 1, 5, 5, 4, 2, 5, 2, 5,
           2, 5, 3, 5, 3, 5, 5, 5, 5, 5, 5, 5, 2, 2, 5, 5, 4, 1, 4, 5, 2, 1, 4,
           2, 4, 2, 2, 5, 5, 5, 2, 5, 5, 3, 5, 5, 4, 5, 4, 5, 3, 5, 3, 5, 5, 5,
           2, 3, 5, 5, 4, 5, 5, 2, 2, 5, 2, 2, 4, 1, 2, 1, 5, 2, 5, 5, 5, 1, 5,
           4, 2, 4, 5, 2, 4, 4, 2])
    
    In [61]: L = sch.linkage(d, method='single')
    
    In [62]: sch.fcluster(L, 0.5*d.max(), 'distance')
    Out[62]: 
    array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1])
    

    scipy.cluster.hierarchy sure is confusing. In your link, I don't even recognize my own code!
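
    Since the question is specifically about matching the dendrogram's colors: dendrogram() colors the links below its color_threshold, which defaults to 0.7 * max(L[:, 2]). A minimal sketch (assuming you kept that default) that reuses L from above:

    import scipy.cluster.hierarchy as sch
    
    # dendrogram()'s default color_threshold is 0.7 * max(L[:, 2]);
    # cutting at the same height should give flat clusters that line up
    # with the colored clusters in the plot
    threshold = 0.7 * L[:, 2].max()
    ind = sch.fcluster(L, threshold, criterion='distance')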

  • 2020-12-03 00:33

    You can also try cut_tree; it has a height parameter that should give you what you want for ultrametric trees.
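
    For example, a minimal sketch (reusing the L linkage matrix from the previous answer; the 0.7 * max height is just one choice, mirroring dendrogram's default color threshold):

    from scipy.cluster.hierarchy import cut_tree
    
    # cut the tree at a given height; returns an (n_observations, n_heights)
    # array of cluster ids, one column per requested height
    memberships = cut_tree(L, height=[0.7 * L[:, 2].max()])
    flat = memberships[:, 0]  # cluster id for each observation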
