I need to obtain a k-sized sample without replacement from a population, where each member of the population has an associated weight (W).
Numpy's rando
numpy is likely the best option. But here's another pure Python solution
for weighted samples without replacement.
There are a couple of ways to define the roles of the population and weights parameters. population can represent the total population of items, and weights a list of biases that influence selection. For instance, in a horse race simulation, population could be the horses, each with a unique name, and weights their performance ratings. The functions below follow this model.
from random import random
from bisect import bisect_left
from itertools import accumulate

def wsample(population, weights, k=1):
    wts = list(weights)                      # mutable copy so selected weights can be zeroed
    sampl = []
    rnums = [random() for _ in range(k)]     # one uniform draw per selection
    for r in rnums:
        acm_wts = list(accumulate(wts))      # cumulative weights, recomputed each pass
        total = acm_wts[-1]
        i = bisect_left(acm_wts, total * r)  # locate the band the draw falls into
        p = population[i]
        wts[i] = 0                           # remove the selected member from future draws
        sampl.append(p)
    return sampl
Selected individuals are effectively removed from further selections by setting their weight to 0 and recomputing the accumulated weights on the next pass. If you use this, make sure k <= len(population).
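For example, a call in the horse-race style described above might look like this; the horse names and ratings here are just made-up illustration values:

horses = ["Starlight", "Thunder", "Blaze", "Comet", "Maple"]   # hypothetical names
ratings = [12.0, 30.0, 7.5, 22.0, 3.0]                         # hypothetical performance ratings

picks = wsample(horses, ratings, k=3)
print(picks)   # e.g. ['Thunder', 'Comet', 'Starlight'] -- three distinct horses, weighted by rating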
The first version provides a good reference point for testing the second version below, which is much faster.
In this next version, the accumulated weights are computed only once, and collisions during sampling incur retries. A collision effectively removes that band from the possible selections, while the bands that haven't yet been taken keep their sizes relative to one another, so the selection probabilities stay correct.
A dictionary keyed on the selected indices ensures each selected member is unique. Since dicts preserve insertion order, the items come back in the order they were selected.
The idea seems to work: under testing, the outcomes of the two implementations compare very closely.
def wsample(population, weights, k=1):
    accum = list(accumulate(weights))        # cumulative weights, computed once
    total = accum[-1]
    sampl = {}                               # index -> member; dict keeps insertion order
    while len(sampl) < k:
        index = bisect_left(accum, total * random())
        sampl[index] = population[index]     # a collision simply overwrites and retries
    return list(sampl.values())
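One rough way to check that the two implementations agree is to compare selection frequencies over many runs. This is just a sketch: it assumes the two definitions above have been renamed wsample_v1 and wsample_v2, and the population size and trial count are arbitrary.

from collections import Counter

test_population = list(range(20))
test_weights = [random() for _ in test_population]

counts_v1, counts_v2 = Counter(), Counter()
for _ in range(100_000):
    counts_v1.update(wsample_v1(test_population, test_weights, k=5))   # first (recompute) version
    counts_v2.update(wsample_v2(test_population, test_weights, k=5))   # second (retry) version

# The per-member selection frequencies should track each other closely.
for member in test_population:
    print(member, counts_v1[member], counts_v2[member])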
Although each selection will often loop more than k times (how often depends on the parameters), eliminating the O(n) accumulate() operation on every iteration more than makes up for the retries, giving faster execution times overall. It could be made even faster by requiring the weights to be pre-accumulated, but in my application they need to be recalculated once each cycle anyway.
To use this, you may want to add a guard against infinite looping if that's possible in the application (for example, if fewer than k members have a nonzero weight), and perhaps a check or two that the parameters are as expected.
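As a rough sketch of both suggestions combined (this is not the exact code used for the timings below): a variant that takes pre-accumulated weights and raises after a bounded number of retries. The name acm_weights matches the timing call further down; the max_tries limit is an assumption.

def wsample_acc(population, acm_weights, k=1, max_tries=1_000_000):
    # Same retry-based approach, but the caller supplies already-accumulated weights.
    total = acm_weights[-1]
    sampl = {}
    tries = 0
    while len(sampl) < k:
        tries += 1
        if tries > max_tries:                # guard against infinite looping
            raise RuntimeError("unable to draw k distinct members")
        index = bisect_left(acm_weights, total * random())
        sampl[index] = population[index]
    return list(sampl.values())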
In the tests below, the population consists of 10,000 items, and both versions were given the same randomly generated weights. This was run in a VM hosted on a computer over 10 years old; most people will get better results than this, but it shows the relative speeds of the two approaches.
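For reference, the setup behind these numbers looks roughly like this; the exact way the weights were generated is an assumption:

import timeit

population = list(range(10_000))
weights = [random() for _ in population]      # randomly generated weights, shared by both versions
acm_weights = list(accumulate(weights))       # used by the pre-accumulated variant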
First version:
timeit.timeit("wsample(population, weights, k=5)", globals=globals(), number=10**4)
21.74719240899867
Second version:
timeit.timeit("wsample(population, weights, k=5)", globals=globals(), number=10**4)
4.32836378099455
Second version, modified to take pre-accumulated weights:
timeit.timeit("wsample(population, acm_weights, k=5)", globals=globals(), number=10**4)
0.05602245099726133