TL;DR version
Unfortunately, the only way to improve the performance of this algorithm is to get rid of it and use something better instead.
Are the "truths" obvious? (Spoiler alert: yes, they are)
In your long text I see two "truths" that you've found:
1. If you write array indices as strings and then reinterpret those strings as numbers for two consecutive permutations, the difference will be a multiple of 9.
2. The "graph of slopes" is symmetric.
Unfortunately both of these facts are quite obvious and one of them is not even really true.
The first fact is true as long as the length of the array is at most 10 (so that every index is a single decimal digit). If it is more than 10, i.e. some indices are "mapped" to 2 characters, it stops being true. And it is obviously true if you know the divisibility rule for 9 (in the decimal system): the sum of the digits must be a multiple of 9. Obviously, if both numbers consist of the same digits, they have the same remainder modulo 9, and thus their difference is a multiple of 9. Moreover, if you interpret your string in any numeral system with a base greater than the length of the array, the difference will be a multiple of base - 1 for the same reason. For example, let's use the base-8 system (the columns are: permutation, permutation index, indices string, indices string converted from base 8 to decimal, difference):
permutation  index  indices  decimal (base 8)  difference
abc          0      012       10
acb          1      021       17                 7
bac          2      102       66                49
bca          3      120       80                14
cab          4      201      129                49
cba          5      210      136                 7
If you always use a base that is greater than the length of the array, this fact will hold (but you might need to come up with new digit symbols).
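If you want to see this numerically rather than on paper, here is a quick sketch (the helper names are mine, not from your code) that interprets each permutation's index array as a number in a chosen base and checks that the differences between consecutive permutations are divisible by base - 1:

// simple recursive lexicographic generator, fine for small arrays
function lexPermutations(arr) {
    if (arr.length <= 1) return [arr.slice()];
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        var rest = arr.slice(0, i).concat(arr.slice(i + 1));
        lexPermutations(rest).forEach(function (p) {
            result.push([arr[i]].concat(p));
        });
    }
    return result;
}

// treat the index array as digits of a number in the given base
function asNumber(indices, base) {
    return indices.reduce(function (acc, d) { return acc * base + d; }, 0);
}

// verify: every difference between consecutive permutations is a multiple of base - 1
function checkDifferences(n, base) {
    var arr = [];
    for (var i = 0; i < n; i++) arr.push(i);
    var perms = lexPermutations(arr);
    for (i = 1; i < perms.length; i++) {
        var diff = asNumber(perms[i], base) - asNumber(perms[i - 1], base);
        if (diff % (base - 1) !== 0) return false;
    }
    return true;
}

console.log(checkDifferences(3, 8));  // true - the base-8 example above
console.log(checkDifferences(4, 10)); // true
console.log(checkDifferences(5, 6));  // true - any base greater than the length works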
The second statement is obvious as well and is a direct consequence of how "lexicographic order" is defined. For every index i, if you sum (element-wise) the indices array of the i-th permutation from the start and the i-th permutation from the end, the sum is always the same: an array with all values equal to N - 1, where N is the length of the array. Example:
1. abc 012 - cba 210 => 012 + 210 = 222
2. acb 021 - cab 201 => 021 + 201 = 222
3. bac 102 - bca 120 => 102 + 120 = 222
This is easy to see if you consider permutations of an array of negated indices, i.e. [-N, -(N-1), ..., -1, 0]. Obviously the i-th permutation from the start of this array is the same as the i-th permutation of [0, 1, 2, ..., N] from the end, just with negated signs: negating every value reverses the comparison order, and shifting the negated values up by N maps them back onto [0, 1, 2, ..., N], which gives exactly the element-wise sums shown above.
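Again a quick check, this time of the symmetry claim (it reuses the lexPermutations helper from the sketch above):

// for every i, the i-th permutation from the start and the i-th from the end
// must sum element-wise to N - 1
function checkSymmetry(n) {
    var arr = [];
    for (var i = 0; i < n; i++) arr.push(i);
    var perms = lexPermutations(arr);
    for (i = 0; i < perms.length; i++) {
        var a = perms[i];
        var b = perms[perms.length - 1 - i];
        for (var j = 0; j < n; j++) {
            if (a[j] + b[j] !== n - 1) return false;
        }
    }
    return true;
}

console.log(checkSymmetry(3)); // true: 012 + 210, 021 + 201, 102 + 120 all give 222
console.log(checkSymmetry(5)); // true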
Other questions
- Are there a mathematical relationships or patterns between the indexes as numbers, that is, for example, the set of numbers at graph 1, graph 2 or graph 3, derived from the input of any four values, for example
Yes, there are. Actually, this is exactly the reason why the answer you linked in your question, Permutations without recursive function call, works in the first place. But I doubt there is an algorithm significantly more efficient than the one provided in that answer. Effectively, that answer converts the position of the requested permutation into a value in a variable-base numerical system with bases ranging from 1 to the length of the array. (For a more widespread example of a variable-base numerical system, consider how you convert milliseconds into days-hours-minutes-seconds-milliseconds. You effectively use a numerical system with bases 1000-60-60-24-unlimited. So when you see 12345 days 8 hours 58 minutes 15 seconds 246 milliseconds, you convert it to milliseconds as (((12345 * 24 + 8) * 60 + 58) * 60 + 15) * 1000 + 246, i.e. you treat that notation as 12345 (no base / unlimited) days, 8 (base 24) hours, 58 (base 60) minutes, 15 (base 60) seconds, 246 (base 1000) milliseconds.)
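To illustrate, here is a minimal sketch of that variable-base idea applied to permutations (the code and names are mine, not the code from the linked answer): the position is divided by (len - 1)!, (len - 2)!, ... and each quotient picks the next element from the ones that are still unused.

// returns the index-th permutation (0-based, lexicographic) of arr
function nthPermutation(arr, index) {
    var pool = arr.slice(); // elements that are still available
    var result = [];
    var factorial = 1;
    for (var i = 2; i < pool.length; i++) factorial *= i; // (n - 1)!
    for (var len = pool.length; len > 0; len--) {
        var digit = Math.floor(index / factorial); // "digit" in the current base
        index %= factorial;
        if (len > 1) factorial /= (len - 1);       // next divisor: (len - 2)!
        result.push(pool.splice(digit, 1)[0]);     // take the digit-th remaining element
    }
    return result;
}

console.log(nthPermutation(["a", "b", "c"], 0).join("")); // abc
console.log(nthPermutation(["a", "b", "c"], 3).join("")); // bca
console.log(nthPermutation(["a", "b", "c"], 5).join("")); // cba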
With permutations there are two different tasks that you might need to do:
1. Generate the i-th permutation. The algorithm in the SO answer you linked is reasonably efficient, and I doubt there is anything much better.
2. Generate all permutations, or a stream of permutations, or the next permutation for a given one. This seems to be what you are trying to do with your code. In this case a simple algorithm that analyses the given permutation, finds the first place (from the end) where the permutation is not sorted, and does the swap + sort is reasonably efficient (this is what procrastinator seems to implement, but I didn't look into the details). And again I doubt there is anything much better. One of the major obstacles to a numeric-based algorithm being more efficient is that, for it to work in the general case (i.e. length >= 10), you'll need to do divisions using long arithmetic in large bases, and those operations are not O(1) anymore.
Update (answer to comment)
What I claim is that there is no way to calculate the sequence of numbers that would be more efficient than a direct calculation of the sequence of permutations.
I disagree as to that proposition. Can you show and prove as to that claim?
No, I don't even know how to state this claim in a formal way (how would you define the class of algorithms that don't calculate that sequence of numbers?). Still, I have some evidence to support this point.
First of all, you are probably not the smartest man in the known Universe, and this is a relatively old and well-known topic. Thus the chances that you have discovered an algorithm that is much faster than the existing ones are low. And the fact that nobody uses this technique is evidence against you.
Another point is less arbitrary: the algorithm I suggested in #2 for generating all permutations in sequence is actually reasonably efficient, and thus it will be hard to beat.
Consider a single step of finding the next permutation. First you need to find the first position from the end where the order is not descending. Assume it is k positions from the end; it takes k comparisons to find it. Then you need to do one swap and a sort. But if you are a bit smart, you might notice that the "sort" here can be done much faster, because that part of the list is already sorted (just in reverse order). Thus the sort is just a reverse plus finding the place for the k-th element, and since the reversed tail is sorted, you can use binary search with O(log(k)) complexity. So you need to move about k + 1 elements in memory and do fewer than k comparisons. Here is some code:
// generates an array of all permutations of the array of integers [0, 1, ..., n-1]
function permutations(n) {
    // custom version that relies on the fact that all values are unique, i.e. there will be no equality
    var binarySearch = function (tgt, arr, start, end) {
        // on small ranges a direct loop might be more efficient than an actual binary search
        var SMALL_THRESHOLD = 5;
        if (end - start < SMALL_THRESHOLD) {
            for (var i = start; i <= end; i++) {
                if (arr[i] > tgt)
                    return i;
            }
            throw new Error("Impossible");
        }
        else {
            var left = start;
            var right = end;
            while (left < right) {
                var middle = (left + right) >> 1; // safe /2
                var middleV = arr[middle];
                if (middleV < tgt) {
                    left = middle + 1;
                }
                else {
                    right = middle;
                }
            }
            return left;
        }
    };
    var state = [];
    var allPerms = [];
    var i, swapPos, swapTgtPos, half, tmp;
    for (i = 0; i < n; i++)
        state[i] = i;
    //console.log(JSON.stringify(state));
    allPerms.push(state.slice()); // enforce copy
    if (n > 1) {
        while (true) {
            // find the last position where the order is still ascending
            for (swapPos = n - 2; swapPos >= 0; swapPos--) {
                if (state[swapPos] < state[swapPos + 1])
                    break;
            }
            if (swapPos < 0) // we reached the end
                break;
            // reverse the (descending) tail of the array so it becomes ascending
            half = (n - swapPos) >> 1; // safe /2
            for (i = 1; i < half + 1; i++) {
                // swap [swapPos + i] <-> [n - i]
                tmp = state[n - i];
                state[n - i] = state[swapPos + i];
                state[swapPos + i] = tmp;
            }
            // do the final swap: exchange the pivot with the smallest tail element greater than it
            swapTgtPos = binarySearch(state[swapPos], state, swapPos + 1, n - 1);
            tmp = state[swapTgtPos];
            state[swapTgtPos] = state[swapPos];
            state[swapPos] = tmp;
            //console.log(JSON.stringify(state));
            allPerms.push(state.slice()); // enforce copy
        }
    }
    //console.log("n = " + n + " count = " + allPerms.length);
    return allPerms;
}
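A quick usage example (just to show the shape of the output):

console.log(permutations(3).map(function (p) { return p.join(""); }).join(" "));
// prints: 012 021 102 120 201 210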
Now imagine that you do the same with your number-based approach and, for a moment, assume that you can calculate the number to add for each step instantly. So how much time do you use now? As you have to use long arithmetic, and we know that the highest digit changed by your addition is the k-th, you'll need to perform at least k additions and k comparisons for overflow (carry). And of course you'll still have to do at least k writes to memory. So to be more efficient than the "usual" algorithm described above, you need a way to calculate a k-digit-long number (the one you will add) in less time than it takes to perform a binary search in an array of size k. That sounds like quite a tough job to me. For example, the multiplication of 9 (or rather N - 1) by the corresponding coefficient alone will probably take more time using long arithmetic.
So what other chances do you have? Don't use long arithmetic at all. In this case, the first obvious argument is that mathematically it makes little sense to compare algorithms' performance on small N (this is why Big-O notation is used for algorithm complexity). Still, it might make sense to fight for the performance of a range that is "small" from the pure mathematical point of view but "big" for real-world cases: up to permutations of an array of 20 elements, which still fits into a long (64-bit) integer. So what can you gain by not using long arithmetic? Well, your additions and multiplications will take only one CPU instruction each. But then you'll have to use division to split your number back into digits, and that will take N divisions and N checks (i.e. comparisons) on each step. And N is always greater than k, often much greater. So this doesn't look like a great avenue for performance improvements either.
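To make that last point concrete, here is a rough sketch (my own illustration, not code from your question) of what "splitting the number back into digits" costs on every step:

// recovering the index array from a single packed integer takes about
// N modulo operations and N divisions per step, no matter how cheap the
// addition itself was
function unpackDigits(value, base, n) {
    var digits = new Array(n);
    for (var i = n - 1; i >= 0; i--) {
        digits[i] = value % base;          // one modulo per digit
        value = Math.floor(value / base);  // one division per digit
    }
    return digits;
}

console.log(unpackDigits(136, 8, 3)); // [2, 1, 0], i.e. "210" from the base-8 table above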
To sum up: the suggested algorithm is efficient, and any arithmetic-based algorithm will probably be less efficient in its arithmetic part.