I have a set of tasks, let's call it T[], where each task T[i] needs a certain amount of time t(T[i]) to be processed. The tasks are being processed in parallel on a fixed number of threads, and I want to estimate how long the whole set will take to finish.
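(For illustration, not part of the original question: with k threads and fixed task times, the total wall-clock time can never be less than the larger of the summed work divided by k and the single longest task; the actual finish time depends on how the tasks pack onto the threads, which is what the answer below simulates. A tiny self-contained sketch of that lower bound, with made-up task times:)

import java.util.List;

public class LowerBound {
    public static void main(String[] args) {
        List<Long> t = List.of(4L, 2L, 1L); // hypothetical task times
        int k = 2;                          // thread count
        long total   = t.stream().mapToLong(Long::longValue).sum();
        long longest = t.stream().mapToLong(Long::longValue).max().orElse(0);
        // Work must be spread across k threads, and the longest task cannot be split.
        long lowerBound = Math.max((total + k - 1) / k, longest);
        System.out.println(lowerBound); // prints 4
    }
}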
Simulating the test run is the solution when the execution order is (roughly) deterministic. I took my real processing code and replaced it with a simple Thread.sleep, sleeping for the time each task is expected to take, but interpreting the value as milliseconds instead of seconds to scale the whole run down by a factor of 1000. At the end I scaled the measured time back up, and the result is quite good: for example, an estimate of 1 hour 39 minutes corresponds to a simulation that runs for only about 5.9 seconds. I ran it with nearly 100 tasks of vastly different execution times on 5 threads; it estimated 1 hour 39 minutes, and the real run was off by only 3 minutes.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import static java.lang.System.currentTimeMillis;
import static java.util.concurrent.TimeUnit.MINUTES;

long startSim = currentTimeMillis();
List<Long> taskTimes = parallelTests.getRuntimesForAllTests(); // ordered from longest to shortest
ExecutorService simulationExecutor = Executors.newFixedThreadPool(threadCount);
taskTimes.forEach(taskTime -> simulationExecutor.submit(() -> {
    try {
        Thread.sleep(taskTime); // the value is really seconds, but we take it as milliseconds
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
    }
}));
simulationExecutor.shutdown();
simulationExecutor.awaitTermination(1, MINUTES); // a generous upper bound for the scaled-down run
long stopSim = currentTimeMillis();
long timeNeeded = stopSim - startSim;
// timeNeeded is measured in simulated milliseconds; since one millisecond stands
// for one second, the number is already the estimate in real seconds
// (multiply by 1000 if you want real milliseconds)
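To round this off, here is a minimal sketch of turning the measured value into a readable estimate; the Duration-based formatting is my own addition, not part of the original snippet:

import java.time.Duration;

long estimatedSeconds = timeNeeded; // one simulated millisecond == one real second
Duration estimate = Duration.ofSeconds(estimatedSeconds);
System.out.printf("Estimated runtime: %d h %d min%n",
        estimate.toHours(), estimate.toMinutesPart());

For the run described above this would print roughly "Estimated runtime: 1 h 39 min".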