Given any number n, and three operations on n (add 1, subtract 1, or divide n by 2 when it is even), I want to find the minimum number of operations needed to reduce n to 1.
In summary: if n is even, divide by 2; if n is odd, subtract 1 when its two least significant bits are 01 (or when n is 3), and add 1 when they are 11. Repeat these operations on n until you reach 1, counting the number of operations performed. This is guaranteed to give the minimum.
As an alternative to the proof from @trincot, here is one that has fewer cases and is hopefully clearer:
Proof:
Let y be the value of the number after performing some operations on it. To start, y = n.
Case 1: y is even
Dividing is optimal: two additions (or two subtractions) followed by a division reach the same value as a division followed by a single addition (or subtraction), but take one step more, and an addition immediately followed by a subtraction (or vice versa) is wasted. So when y is even, we divide.
Case 2: y is odd
The goal here is to show that when faced with an odd n, either adding or subtracting results in fewer operations to reach a given state. We can use the fact that dividing is optimal when faced with an even number.
We will represent n with a partial bitstring showing the least significant bits: X1, or X01, etc., where X represents the remaining bits and is nonzero. When X is 0, the correct answers are clear: for 1, you're done; for 2 (0b10), divide; for 3 (0b11), subtract and divide.
Attempt 1: Check whether adding or subtracting is better with one bit of information:
We reach an impasse: if X or X+1 were known to be even, the optimal move would be to divide. But we don't know the parity of X or X+1, so we can't continue.
Attempt 2: Check whether adding or subtracting is better with two bits of information:
Conclusion: for X01, subtracting will result in at least as few operations as adding: 3 and 4 operations versus 4 and 4 operations to reach X and X+1.
Conclusion: for X11, adding will result in at least as few operations as subtracting: 3 and 4 operations versus 4 and 4 operations to reach X+1 and X.
Thus, if n's least significant bits are 01, subtract. If n's least significant bits are 11, add.
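The rules above translate directly into a short loop. This is my own Python sketch of the derived rules (with 3 handled as the special case noted earlier), not code from the original answer:

```python
def min_ops(n):
    """Minimum operations to reduce n to 1, applying the derived rules."""
    ops = 0
    while n > 1:
        if n % 2 == 0:
            n //= 2    # even: always divide
        elif n == 3 or n % 4 == 1:
            n -= 1     # least significant bits 01 (or n == 3): subtract
        else:
            n += 1     # least significant bits 11: add
        ops += 1
    return ops

print(min_ops(15))  # 15 -> 16 -> 8 -> 4 -> 2 -> 1: prints 5
```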
To solve the above problem you can use either recursion or a loop. A recursive answer is already provided, so I will give a while-loop approach.
Logic: when n is odd, both n - 1 and n + 1 are even; we move to whichever has fewer set bits, since fewer set bits generally means more halvings before hitting another odd number.
To solve your problem I'm using Java. I have tried it with a few numbers and it works fine; if it doesn't, add a comment or edit the answer.
int steps = 0; // n is the number entered
while (n != 1)
{
    steps++;
    if (n % 2 == 0)
    {
        n = n / 2;
    }
    else if (Integer.bitCount(n - 1) > Integer.bitCount(n + 1))
    {
        n += 1; // n + 1 has fewer set bits
    }
    else
    {
        n -= 1; // n - 1 has at most as many set bits
    }
}
System.out.println(steps);
The code is written in a very simple form so that it can be understood by everyone. Here n is the number entered and steps is the number of steps required to reach 1.
The solution offered by Ami Tavoy fails when n = 3 is considered: adding would produce 4 (0b100, with count_to_1 equal to 2), which is greater than for subtracting to 2 (0b10, with count_to_1 equal to 1), so it would add and take 3 steps instead of the optimal 2. You can special-case n = 3 to finish off the solution:
def min_steps_back(n):
    count_to_1 = lambda x: bin(x)[::-1].index('1')
    if n in [0, 1]:
        return 1 - n
    if n == 3:
        return 2
    if n % 2 == 0:
        return 1 + min_steps_back(n // 2)
    return 1 + (min_steps_back(n + 1) if count_to_1(n + 1) > count_to_1(n - 1) else min_steps_back(n - 1))
Sorry, I know this would make a better comment, but I just started.
I am really bad at binary, so no counting of the LSBs or MSBs here. What about the program below?
public class ReduceNto1 {
    public static void main(String[] args) {
        int count1 = count(59); // input number
        System.out.println("total min steps - " + count1);
    }

    static int count(int n) {
        System.out.println(n + " > ");
        if (n == 1) {
            return 0;
        } else if (n % 2 == 0) {
            return 1 + count(n / 2);
        } else {
            return 1 + Math.min(count(n - 1), count(n + 1));
        }
    }
}
There is a pattern which allows you to know the optimal next step in constant time. In fact, there can be cases where there are two equally optimal choices -- in that case one of them can be derived in constant time.
If you look at the binary representation of n, and its least significant bits, you can make some conclusions about which operation is leading to the solution. In short:
If the least significant bit is zero, the next operation should be the division by 2. We could instead try 2 additions and then a division, but then that same result can be achieved in two steps: divide and add. Similarly with 2 subtractions. And of course, we can ignore the useless subsequent add & subtract steps (or vice versa). So if the final bit is 0, division is the way to go.
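That replacement argument is just arithmetic: for even y, two additions followed by a division land on the same value as a division followed by one addition, and likewise for subtractions. A quick check (my illustration, not part of the answer):

```python
# For an even number y, (y + 2) / 2 == y / 2 + 1 and (y - 2) / 2 == y / 2 - 1,
# so "add, add, divide" (3 steps) never beats "divide, add" (2 steps),
# and the same holds for subtraction.
for y in range(4, 1000, 2):
    assert (y + 2) // 2 == y // 2 + 1
    assert (y - 2) // 2 == y // 2 - 1
```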
Then the remaining 3-bit patterns are like **1. There are four of them. Let's write a011 to denote a number that ends with bits 011 and has a set of prefixed bits that would represent the value a:
a001: adding one would give a010, after which a division should occur, giving a01: 2 steps taken. We would not want to subtract one now, because that would lead to a00, which we could have arrived at in two steps from the start (subtract 1 and divide). So again we add and divide to get a1, and for the same reason we repeat that again, giving: a+1. This took 6 steps, but leads to a number that could be arrived at in 5 steps (subtract 1, divide 3 times, add 1), so clearly, we should not perform the addition. Subtraction is always better.
a111: addition is equal or better than subtraction. In 4 steps we get a+1. Subtraction and division would give a11. Adding now would be inefficient compared to the initial addition path, so we repeat this subtract/divide twice and get a in 6 steps. If a ends in 0, then we could have done this in 5 steps (add, divide three times, subtract); if a ends in a 1, then even in 4. So addition is always better.
a101: subtraction and double division leads to a1 in 3 steps. Addition and division leads to a11. To now subtract and divide would be inefficient compared to the subtraction path, so we add and divide twice to get a+1 in 5 steps. But with the subtraction path, we could reach this in 4 steps. So subtraction is always better.
a011: addition and double division leads to a1. To get a would take 2 more steps (5); to get a+1: one more (6). Subtraction, division, subtraction, double division leads to a (5); to get a+1 would take one more step (6). So addition is at least as good as subtraction. There is however one case not to overlook: if a is 0, then the subtraction path reaches the solution half-way, in 2 steps, while the addition path takes 3 steps. So addition is always leading to the solution, except when n is 3: then subtraction should be chosen.
So for odd numbers the second-last bit determines the next step (except for 3).
This leads to the following algorithm (Python), which needs one iteration for each step and should thus have O(log n) complexity:
def stepCount(n):
    count = 0
    while n > 1:
        if n % 2 == 0:               # bitmask: *0
            n = n // 2
        elif n == 3 or n % 4 == 1:   # bitmask: 01
            n = n - 1
        else:                        # bitmask: 11
            n = n + 1
        count += 1
    return count
Here is a version where you can input a value for n and let the snippet produce the number of steps:
function stepCount(n) {
    var count = 0
    while (n > 1) {
        if (n % 2 == 0) // bitmask: *0
            n = n / 2
        else if (n == 3 || n % 4 == 1) // bitmask: 01
            n = n - 1
        else // bitmask: 11
            n = n + 1
        count += 1
    }
    return count
}

// I/O
var input = document.getElementById('input')
var output = document.getElementById('output')
var calc = document.getElementById('calc')

calc.onclick = function () {
    var n = +input.value
    if (n > 9007199254740991) { // 2^53-1
        alert('Number too large for JavaScript')
    } else {
        var res = stepCount(n)
        output.textContent = res
    }
}
<input id="input" value="123549811245">
<button id="calc">Calculate steps</button><br>
Result: <span id="output"></span>
Please be aware that the accuracy of JavaScript is limited to around 10^16, so results will be wrong for bigger numbers. Use the Python script instead to get accurate results.
I like the idea by squeamish ossifrage of greedily looking (for the case of odd numbers) whether n + 1 or n - 1 looks more promising, but think deciding what looks more promising can be done a bit better than looking at the total number of set bits.
For a number x, bin(x)[::-1].index('1') indicates the number of least-significant 0s until the first 1. The idea, then, is to see whether this number is higher for n + 1 or n - 1, and choose the higher of the two (many consecutive least-significant 0s indicate more consecutive halvings).
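For example, a quick check of this helper on a few values:

```python
# count trailing zeros: reverse the binary string and find the first '1'
count_to_1 = lambda x: bin(x)[::-1].index('1')

print(count_to_1(24))  # 24 = 0b11000: 3 trailing zeros, so 3 consecutive halvings; prints 3
print(count_to_1(7))   # 7 = 0b111: no trailing zeros; prints 0
```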
This leads to
def min_steps_back(n):
    count_to_1 = lambda x: bin(x)[::-1].index('1')
    if n in [0, 1]:
        return 1 - n
    if n % 2 == 0:
        return 1 + min_steps_back(n // 2)
    return 1 + (min_steps_back(n + 1) if count_to_1(n + 1) > count_to_1(n - 1) else min_steps_back(n - 1))
To compare the two, I ran
num = 10000
ms, msb = 0., 0.
for i in range(1000):
    # min_steps is the set-bits-greedy version from squeamish ossifrage's answer
    n = random.randint(1, 99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999)
    ms += min_steps(n)
    msb += min_steps_back(n)
print ms / num, msb / num
Which outputs
57.4797 56.5844
showing that, on average, this does use fewer operations (albeit not by that much).
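As a closing sanity check (my addition, not code from any of the answers above): the constant-time rule from @trincot's answer can be verified against a memoized exhaustive search for small n. Both functions below are restatements written for this test:

```python
from functools import lru_cache

def step_count(n):
    # constant-time rule: divide when even; subtract for ...01 (and for 3); add for ...11
    count = 0
    while n > 1:
        if n % 2 == 0:
            n //= 2
        elif n == 3 or n % 4 == 1:
            n -= 1
        else:
            n += 1
        count += 1
    return count

@lru_cache(maxsize=None)
def brute(n):
    # exhaustive minimum; relies on the proven fact that dividing is optimal
    # when n is even, which keeps the n + 1 branch from recursing forever
    if n == 1:
        return 0
    if n % 2 == 0:
        return 1 + brute(n // 2)
    return 1 + min(brute(n - 1), brute(n + 1))

assert all(step_count(n) == brute(n) for n in range(1, 10000))
```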