In my .NET project I use the Mono implementation of BigInteger (link); in Java I use java.math.BigInteger.
The same code produces different results in Java than in .NET.
.NET code:
I solved my problem by adding a 0 byte at the beginning of inputBytes.
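For anyone hitting the same thing, here is a minimal sketch of why that leading zero helps (the `inputBytes` values below are made up purely for illustration): java.math.BigInteger(byte[]) reads the array as signed two's complement, so an array whose first byte has its high bit set comes out negative; prepending a 0x00 byte clears the sign bit.

```java
import java.math.BigInteger;

public class LeadingZeroDemo {
    public static void main(String[] args) {
        // Illustrative bytes whose first (most significant) byte has the high bit set.
        byte[] inputBytes = { (byte) 0x90, 0x12, 0x34 };

        // Interpreted as big-endian two's complement: the high bit is a sign bit,
        // so this prints a negative number (-7335372).
        System.out.println(new BigInteger(inputBytes));

        // Prepend a zero byte so the sign bit is clear; the result is the
        // unsigned interpretation of the original bytes (9441844).
        byte[] padded = new byte[inputBytes.length + 1];
        System.arraycopy(inputBytes, 0, padded, 1, inputBytes.length);
        System.out.println(new BigInteger(padded));
    }
}
```

The same effect can be had without copying via the BigInteger(int signum, byte[] magnitude) constructor, e.g. new BigInteger(1, inputBytes), which treats the array as an unsigned big-endian magnitude.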
From the docs for java.math.BigInteger(byte[]):
Translates a byte array containing the two's-complement binary representation of a BigInteger into a BigInteger. The input array is assumed to be in big-endian byte-order: the most significant byte is in the zeroth element.
From the docs for System.Numerics.BigInteger(byte[]):
The individual bytes in the value array should be in little-endian order, from lowest-order byte to highest-order byte.
So you might want to try simply reversing the input bytes for one of the values you've got - it's not clear which set you should reverse, as we don't know what values you're trying to represent. I would also suggest adding a diagnostic: print out the normal decimal representation immediately after construction in each case - if those aren't the same, the rest of the code is irrelevant.
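To make that suggestion concrete, here is a rough sketch in Java (the byte values and the `reversed` helper are just for illustration): reverse the little-endian array that the .NET constructor expects into the big-endian order Java expects, then print the decimal value on both sides and compare.

```java
import java.math.BigInteger;

public class EndianCheck {
    // Copy of the array with the byte order reversed (little-endian <-> big-endian).
    static byte[] reversed(byte[] bytes) {
        byte[] out = new byte[bytes.length];
        for (int i = 0; i < bytes.length; i++) {
            out[i] = bytes[bytes.length - 1 - i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Little-endian bytes as System.Numerics.BigInteger would take them
        // (lowest-order byte first); together they represent 0x341290 = 3412624.
        byte[] littleEndian = { (byte) 0x90, 0x12, 0x34, 0x00 };

        // Fed straight into java.math.BigInteger they are read as big-endian
        // two's complement and give an unrelated negative number.
        System.out.println(new BigInteger(littleEndian));

        // Reversed first, Java prints 3412624, matching what the
        // System.Numerics constructor would produce from the little-endian array.
        System.out.println(new BigInteger(reversed(littleEndian)));
    }
}
```

If the two decimal printouts already disagree at this point, the byte order (or the sign handling above) is the problem, not whatever arithmetic comes afterwards.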