Converting a big integer to decimal string

情话喂你 2020-12-19 20:40

At the risk of having this question voted as a duplicate, or even closed, this question has come up.

Background

In "normal

4 Answers
  •  执笔经年
    2020-12-19 20:54

    The accepted answer already provides you with a simple way to do this. That works fine and gives you a nice result. However, if you really need to convert large values to a string, there is a better way.

    I will not go into details, because my solution is written in Delphi, which many readers can't easily read, and it is pretty long (several functions of 100+ lines of code, using yet other functions, etc.), which cannot be explained in a simple answer, especially because the conversion handles different number bases differently.

    But the principle is to split the number into two halves of almost equal size, by dividing by a power of 10. To convert these, recursively cut them into two smaller parts again, by a smaller power of 10, and so on, until the size of the parts reaches some lower limit (say, 32 bits), at which point you finally convert them the conventional way, i.e. as in the accepted answer.

    The partial conversions are then "concatenated" (actually, the digits are written directly into a single buffer at the correct offsets), so at the end, you get one huge string of digits.

    This is a bit tricky, and I only mention it for those who want to investigate this for extremely large numbers. It doesn't make sense for numbers with fewer than, say, 100 digits.

    This is a recursive method, indeed, but not one that simply divides by 10.
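    To make the principle concrete, here is a minimal sketch in Java (not my Delphi code; the class and method names and the THRESHOLD value are just illustrative). It returns a String via concatenation, whereas the real implementation writes the digits directly into one preallocated buffer:

        import java.math.BigInteger;

        public final class FastDecimal {

            // Below this size, plain BigInteger.toString() is fast enough.
            private static final int THRESHOLD = 64; // bits

            public static String toDecimal(BigInteger n) {
                if (n.signum() < 0) {
                    return "-" + toDecimal(n.negate());
                }
                return recurse(n, 0);
            }

            // width > 0 means: left-pad the result with zeros to exactly
            // 'width' digits, because inner parts must keep leading zeros.
            private static String recurse(BigInteger n, int width) {
                if (n.bitLength() <= THRESHOLD) {
                    return pad(n.toString(), width);
                }
                // Estimate the decimal digit count, then split near the
                // middle by dividing by a power of 10
                // (quotient = high half, remainder = low half).
                int digits = (int) (n.bitLength() * Math.log10(2));
                int half = digits / 2;
                BigInteger[] qr = n.divideAndRemainder(BigInteger.TEN.pow(half));
                String hi = recurse(qr[0], 0);     // natural width
                String lo = recurse(qr[1], half);  // padded to 'half' digits
                return pad(hi + lo, width);
            }

            private static String pad(String s, int width) {
                StringBuilder sb = new StringBuilder();
                for (int i = s.length(); i < width; i++) sb.append('0');
                return sb.append(s).toString();
            }
        }

    For example, toDecimal(BigInteger.ONE.shiftLeft(100000)) yields the 30103-digit decimal expansion of 2^100000. A serious implementation would also cache the powers of 10 used for the splits instead of recomputing them at every recursion step.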

    The size of the buffer can be precalculated, by doing something like

    bufSize = myBigInt.bitLength() * Math.log10(2) + some_extra_to_be_sure;
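
    (In Java terms: BigInteger.bitLength() is the number of bits in the magnitude, and log10(2) ≈ 0.30103, so e.g. a 1024-bit number has at most 309 decimal digits; the extra covers the sign and rounding.)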
    

    I use a precalculated table for the different number bases, but that is an implementation detail.

    For very large numbers, this will be much faster than a loop that repeatedly divides by 10, especially since there the entire number must be divided by 10 every time, and it only shrinks very slowly. The divide-and-conquer algorithm divides ever smaller numbers, and the total number of (costly) divisions needed to cut the parts is far lower (log N instead of N, is my guess). So: fewer divisions, on (on average) much smaller numbers.

    cf. Brent, Zimmermann, "Modern Computer Arithmetic", algorithm 1.26
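
    (With subquadratic multiplication, this divide-and-conquer conversion runs in O(M(n) log n) time, where M(n) is the cost of multiplying two n-word numbers, compared to Θ(n²) for the division-by-10 loop.)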

    My code and explanations can be found here, if you want to see how it works: BigIntegers unit
