Why is 0 divided by 0 an error?

耶瑟儿~ 2021-02-05 07:20

I have come across this problem in a calculation I do in my code, where the divisor is 0 whenever the dividend is 0 too. In my code I return 0 for that case. I am wondering why dividing 0 by 0 is treated as an error in the first place.
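For reference, here is a minimal sketch (in Python, since the question doesn't name a language) of the workaround described above, using a hypothetical safe_divide helper that returns 0 for the 0 / 0 case:

    def safe_divide(dividend: float, divisor: float) -> float:
        """Divide, treating the 0/0 case as 0 (the convention described in the question)."""
        if dividend == 0 and divisor == 0:
            return 0.0                 # special-cased, as the question does
        return dividend / divisor      # any other division by 0 still raises

    print(safe_divide(0, 0))    # 0.0
    print(safe_divide(10, 2))   # 5.0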

18 Answers
  •  無奈伤痛
    2021-02-05 07:59

    The problem is with the denominator. The numerator is effectively irrelevant.

    10 / n
    10 / 1 = 10
    10 / 0.1 = 100
    10 / 0.001 = 1,000
    10 / 0.0001 = 10,000
    Therefore: 10 / 0 = infinity (in the limit as n approaches 0)
    

    The pattern is that as n gets smaller, the result gets bigger. At n = 0 the result would be infinity, which is an unstable, non-fixed point. You can't write infinity down as a number, because it isn't one; it's the concept of an ever-increasing quantity.
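    To see this concretely, here is a small sketch (my addition, in Python with NumPy, which follows IEEE-754 arithmetic): a nonzero numerator over 0 blows up toward infinity, while 0 / 0 comes out as NaN ("not a number"), i.e. indeterminate:

        import numpy as np

        # The shrinking-denominator pattern from above: the quotient grows without bound.
        for n in (1.0, 0.1, 0.001, 0.0001):
            print(f"10 / {n} = {10.0 / n}")

        # IEEE-754 arithmetic (exposed by NumPy) separates the two zero-division cases.
        with np.errstate(divide="ignore", invalid="ignore"):
            print(np.divide(10.0, 0.0))  # inf  -- nonzero / 0
            print(np.divide(0.0, 0.0))   # nan  -- 0 / 0, indeterminate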

    Alternatively, you could think of it mathematically using the laws of logarithms and thus take division out of the equation altogether:

        log(0/0) = log(0) - log(0)
    

    BUT

        log(0) = -infinity
    

    Again, the problem is that the result is undefined, because -infinity is a concept and not a numerical value you can work with.
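    You can watch the log(0) problem happen in practice (again my addition, sketched in Python): the standard library treats it as a domain error, while IEEE-754-style libraries return -infinity:

        import math
        import numpy as np

        try:
            math.log(0.0)                        # the math module refuses outright
        except ValueError as err:
            print("math.log(0) ->", err)         # "math domain error"

        with np.errstate(divide="ignore"):
            print("np.log(0)   ->", np.log(0.0)) # -inf, the IEEE-754 convention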

    Having said all this, if you're interested in how to turn an indeterminate form into a determinate form, look up l'Hopital's rule, which effectively says:

    lim[x→a] f(x) / g(x) = lim[x→a] f'(x) / g'(x)
    

    assuming the limit on the right-hand side exists, so you can get a result which is a fixed point instead of an unstable one.
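    As a worked example of the rule (my addition, using SymPy, which isn't part of the original answer): sin(x) / x is a 0/0 form at x = 0, but differentiating numerator and denominator gives cos(x) / 1, whose limit is 1:

        from sympy import symbols, sin, cos, limit

        x = symbols("x")

        # Direct substitution into sin(x)/x at x = 0 gives 0/0, an indeterminate form.
        print(limit(sin(x) / x, x, 0))   # 1 -- SymPy resolves the indeterminate form
        # l'Hopital: differentiate top and bottom, then take the limit of cos(x)/1.
        print(limit(cos(x) / 1, x, 0))   # 1 -- the same value, now a determinate form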

    Hope that helps a little,

    Tony Breyal

    P.S. Using the rules of logs is often a good computational way to get around operations that produce numbers so infinitesimally small that, given the precision of a machine's floating-point values, they are indistinguishable from zero. A practical programming example is 'maximum likelihood' estimation, which generally has to work with logs in order to keep the solutions numerically stable.
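    A minimal sketch of that P.S. (my own example, in Python, with made-up numbers): multiplying many tiny probabilities underflows to exactly 0.0 in double precision, while summing their logs stays finite and usable:

        import math

        probs = [1e-300] * 5                              # tiny probabilities (made-up data)

        likelihood = math.prod(probs)                     # underflows to 0.0
        log_likelihood = sum(math.log(p) for p in probs)  # finite: about -3453.9

        print(likelihood, log_likelihood)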
