What does `sample_weight` do to the way a `DecisionTreeClassifier` works in sklearn?
I've read in the documentation that "Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (`sample_weight`) for each class to the same value." It is still unclear to me how this works, though. If I set `sample_weight` to an array with only two possible values, 1s and 2s, does that mean the samples with a weight of 2 will get sampled twice as often as the samples with a weight of 1 when doing the bagging? I cannot think of a practical example for this.

Answer (Matt Hancock):

So I spent a little time looking at the sklearn
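As a quick sketch of the behavior in question (my own illustration, not code from the answer): in a single `DecisionTreeClassifier`, `sample_weight` does not resample anything. The weights enter the weighted impurity computation at each candidate split, so with default parameters an integer weight of 2 behaves like duplicating that sample in the training set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: 4 samples, 2 classes.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# Give the third sample a weight of 2 in the impurity computation...
w = np.array([1.0, 1.0, 2.0, 1.0])
tree_weighted = DecisionTreeClassifier(random_state=0).fit(X, y, sample_weight=w)

# ...which, with default parameters, matches duplicating that sample outright.
X_dup = np.vstack([X, [[2.0]]])
y_dup = np.append(y, 1)
tree_dup = DecisionTreeClassifier(random_state=0).fit(X_dup, y_dup)

# The two fitted trees have identical structure.
print(export_text(tree_weighted) == export_text(tree_dup))
```

Note that this equivalence assumes default stopping criteria: `min_samples_leaf` and `min_samples_split` count raw samples, not weights (weight-based stopping is controlled separately by `min_weight_fraction_leaf`), so weighting and literal duplication can diverge once those parameters are changed.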