Question:
Is there a clever/efficient algorithm for determining the hypotenuse of a right triangle (i.e. sqrt(a² + b²)), using fixed-point math on an embedded processor without hardware multiply?
Answer 1:
Unless you're doing this at >1 kHz, multiply even on an MCU without hardware MUL isn't terrible. What's much worse is the sqrt. I would try to modify my application so it doesn't need to calculate it at all.
Standard libraries would probably be best if you actually need it, but you could look at using Newton's method as a possible alternative. It would require several multiply/divide cycles to perform, however.
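If you do end up needing it, a minimal integer square root via Newton's method might look like the sketch below (the function name, types, and iteration scheme are my own illustration, not code from the app notes; each iteration costs one software divide):

#include <stdint.h>

/* Integer square root via Newton's method: iterate x = (x + n/x) / 2
 * until it stops decreasing; the result is floor(sqrt(n)).
 * Illustrative sketch only. */
uint16_t isqrt32(uint32_t n)
{
    uint32_t x, y;
    if (n == 0)
        return 0;
    x = n;
    y = (x + 1) / 2;
    while (y < x) {
        x = y;
        y = (x + n / x) / 2;
    }
    return (uint16_t)x;
}

/* Usage, for inputs small enough that a*a + b*b fits in 32 bits:
 *   h = isqrt32((uint32_t)a * a + (uint32_t)b * b);
 */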
AVR resources
- Atmel App note AVR200: Multiply and Divide Routines (pdf)
- This sqrt function on AVR Freaks forum
- Another AVR Freaks post
Answer 2:
If the result doesn't have to be particularly accurate, you can get a crude approximation quite simply:
Take the absolute values of a and b, and swap if necessary so that you have a <= b. Then:
h = ((sqrt(2) - 1) * a) + b
To see intuitively how this works, consider the way that a shallow angled line is plotted on a pixel display (e.g. using Bresenham's algorithm). It looks something like this:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| | | | | | | | | | | | | | | | |*|*|*| ^
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | | | | | | | | | |*|*|*|*| | | | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | | | | | |*|*|*|*| | | | | | | | a pixels
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | |*|*|*|*| | | | | | | | | | | | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|*|*|*|*| | | | | | | | | | | | | | | | v
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
<-------------- b pixels ----------->
For each step in the b direction, the next pixel to be plotted is either immediately to the right, or one pixel up and to the right.
The ideal line from one end to the other can be approximated by the path which joins the centre of each pixel to the centre of the adjacent one. This is a series of a segments of length sqrt(2), and b-a segments of length 1 (taking a pixel to be the unit of measurement). Hence the above formula.
This clearly gives an accurate answer for a == 0 and a == b, but gives an over-estimate for values in between.
The error depends on the ratio b/a; the maximum error occurs when b = (1 + sqrt(2)) * a, and turns out to be 2/sqrt(2 + sqrt(2)), or about 8.24% over the true value. That's not great, but if it's good enough for your application, this method has the advantage of being simple and fast. (The multiplication by a constant can be written as a sequence of shifts and adds.)
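As a concrete illustration of the shifts-and-adds remark, one possible fixed-point sketch follows (the function name, widths, and the particular decomposition 1/4 + 1/8 + 1/32 + 1/128 ≈ 0.4141 standing in for sqrt(2)-1 ≈ 0.4142 are my own choices):

#include <stdint.h>

/* h ≈ (sqrt(2)-1)*a + b with |a| <= |b|, using only shifts and adds.
 * Illustrative sketch only. */
uint32_t hypot_approx(int16_t x, int16_t y)
{
    uint32_t a = (x < 0) ? (uint32_t)-(int32_t)x : (uint32_t)x;
    uint32_t b = (y < 0) ? (uint32_t)-(int32_t)y : (uint32_t)y;
    uint32_t t;
    if (a > b) { t = a; a = b; b = t; }   /* ensure a <= b */
    return b + (a >> 2) + (a >> 3) + (a >> 5) + (a >> 7);
}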
Answer 3:
For the record, here are a few more approximations, listed in roughly increasing order of complexity and accuracy. All these assume 0 ≤ a ≤ b.
h = b + 0.337 * a // max error ≈ 5.5 %
h = max(b, 0.918 * (b + (a>>1))) // max error ≈ 2.6 %
h = b + 0.428 * a * a / b // max error ≈ 1.04 %
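For instance, the second formula might be turned into integer-only code roughly like this (a sketch under my own assumptions: 16-bit inputs and the Q8 constant 235/256 ≈ 0.918 standing in for 0.918; the multiply by 235 can be a software multiply or a chain of shifts and adds):

#include <stdint.h>

/* h ≈ max(b, 0.918 * (b + a/2)), integer form.  Illustrative sketch only. */
uint32_t hypot_approx2(uint16_t a, uint16_t b)
{
    uint16_t t;
    uint32_t s, h;
    if (a > b) { t = a; a = b; b = t; }   /* ensure a <= b */
    s = (uint32_t)b + (a >> 1);
    h = (235u * s) >> 8;                  /* ≈ 0.918 * (b + a/2) */
    return (h > b) ? h : b;
}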
Edit: to answer Ecir Hana's question, here is how I derived these approximations.
First step. Approximating a function of two variables can be a complex problem. Thus I first transformed this into the problem of approximating a function of one variable. This can be done by choosing the longest side as a “scale” factor, as follows:
h = √(b² + a²)
  = b √(1 + (a/b)²)
  = b f(a/b), where f(x) = √(1 + x²)
Adding the constraint 0 ≤ a ≤ b means we are only concerned with approximating f(x) in the interval [0, 1].
Below is the plot of f(x) in the relevant interval, together with the approximation given by Matthew Slattery (namely (√2 − 1)x + 1).
[figure: f(x) = √(1 + x²) on [0, 1] with the linear approximation (√2 − 1)x + 1]
Second step. The next step is to stare at this plot, while asking yourself the question "how can I approximate this function cheaply?". Since the curve looks roughly parabolic, my first idea was to use a quadratic function (third approximation). But since this is still relatively expensive, I also looked at linear and piecewise-linear approximations. Here are my three solutions:
[figure: the three approximations plotted against f(x) on [0, 1]]
The numerical constants (0.337, 0.918 and 0.428) were initially free parameters. The particular values were chosen in order to minimize the maximum absolute error of the approximations. The minimization could certainly be done by some algorithm, but I just did it "by hand", plotting the absolute error and tuning the constant until the error was minimized. In practice this works quite fast; writing the code to automate it would have taken longer.
Third step is to come back to the initial problem of approximating a function of two variables:
- h ≈ b (1 + 0.337 (a/b)) = b + 0.337 a
- h ≈ b max(1, 0.918 (1 + (a/b)/2)) = max(b, 0.918 (b + a/2))
- h ≈ b (1 + 0.428 (a/b)²) = b + 0.428 a²/b
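If you want to reproduce the quoted error bounds (or re-tune the constants) numerically rather than by hand, a brute-force scan over x = a/b in [0, 1] is enough. A small sketch of my own, not part of the original answer:

#include <math.h>
#include <stdio.h>

/* Scan x = a/b over [0, 1] and record the worst relative error of each
 * of the three approximations against f(x) = sqrt(1 + x*x). */
int main(void)
{
    double e1 = 0.0, e2 = 0.0, e3 = 0.0, x;
    for (x = 0.0; x <= 1.0; x += 1e-5) {
        double f  = sqrt(1.0 + x * x);                 /* exact f(x)      */
        double h1 = 1.0 + 0.337 * x;                   /* approximation 1 */
        double h2 = fmax(1.0, 0.918 * (1.0 + x / 2));  /* approximation 2 */
        double h3 = 1.0 + 0.428 * x * x;               /* approximation 3 */
        e1 = fmax(e1, fabs(h1 - f) / f);
        e2 = fmax(e2, fabs(h2 - f) / f);
        e3 = fmax(e3, fabs(h3 - f) / f);
    }
    printf("max rel. error: %.2f%%  %.2f%%  %.2f%%\n",
           100.0 * e1, 100.0 * e2, 100.0 * e3);
    return 0;
}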
Answer 4:
Consider using CORDIC methods. Dr. Dobb's has an article and associated library source here. Square-root, multiply and divide are dealt with at the end of the article.
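For a flavour of the approach, a generic CORDIC vectoring-mode magnitude routine, using only shifts and adds, might look like the sketch below. This is my own illustration, not the code from the linked library; the names, widths and iteration count are assumptions.

#include <stdint.h>

/* CORDIC vectoring mode: after the loop, x ≈ K * sqrt(x0^2 + y0^2) with
 * gain K ≈ 1.6468, so scaling by 1/K ≈ 0.6073 (approximated below with
 * shifts) gives the magnitude.  Assumes arithmetic right shift of negative
 * values, which is true of typical compilers.  Illustrative sketch only. */
uint16_t cordic_hypot(int16_t x0, int16_t y0)
{
    int32_t x = (x0 < 0) ? -(int32_t)x0 : x0;
    int32_t y = (y0 < 0) ? -(int32_t)y0 : y0;
    int32_t xs, ys;
    int i;

    for (i = 0; i < 12; i++) {
        xs = x >> i;
        ys = y >> i;
        if (y > 0) { x += ys; y -= xs; }  /* rotate towards the x axis */
        else       { x -= ys; y += xs; }
    }
    /* multiply by 1/K ≈ 0.6073 using shifts:
     * 1/2 + 1/16 + 1/32 + 1/128 + 1/256 + 1/1024 ≈ 0.6064 */
    return (uint16_t)((x >> 1) + (x >> 4) + (x >> 5) +
                      (x >> 7) + (x >> 8) + (x >> 10));
}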
Answer 5:
One possibility looks like this:
#include <math.h>

/* Iterations    Accuracy
 *     2          6.5 digits
 *     3           20 digits
 *     4           62 digits
 * assuming a numeric type able to maintain that degree of accuracy in
 * the individual operations.
 */
#define ITER 3

double dist(double P, double Q) {
    /* A reasonably robust method of calculating `sqrt(P*P + Q*Q)'
     *
     * Transliterated from _More Programming Pearls, Confessions of a Coder_
     * by Jon Bentley, pg. 156.
     */
    double R;
    int i;

    P = fabs(P);
    Q = fabs(Q);

    if (P < Q) {
        R = P;
        P = Q;
        Q = R;
    }

    /* The book has this as:
     *   if P = 0.0 return Q; # in AWK
     * However, this makes no sense to me - we've just ensured that P >= Q, so
     * P == 0 only if Q == 0; OTOH, if Q == 0, then distance == P...
     */
    if (Q == 0.0)
        return P;

    for (i = 0; i < ITER; i++) {
        R = Q / P;
        R = R * R;
        R = R / (4.0 + R);
        P = P + 2.0 * R * P;
        Q = Q * R;
    }
    return P;
}
This still does a couple of divides and four multiplies per iteration, but you rarely need more than three iterations (and two is often adequate) per input. At least with most processors I've seen, that'll generally be faster than the sqrt would be on its own.
For the moment it's written for doubles, but assuming you've implemented the basic operations, converting it to work with fixed point shouldn't be terribly difficult.
Answer 6:
You can start by reevaluating whether you need the sqrt at all. Many times you are calculating the hypotenuse just to compare it to another value - if you square the value you're comparing against, you can eliminate the square root altogether.
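For example, a minimal sketch with my own names, where the 16-bit input widths are chosen so the squares and their sum cannot overflow the 32-bit arithmetic:

#include <stdint.h>

/* Compare a distance against a threshold without the square root:
 * instead of  sqrt(a*a + b*b) <= r,  test  a*a + b*b <= r*r. */
int within_radius(int16_t a, int16_t b, int16_t r)
{
    int32_t aa = (int32_t)a * a;
    int32_t bb = (int32_t)b * b;
    int32_t rr = (int32_t)r * r;
    return aa + bb <= rr;
}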
Answer 7:
Maybe you could use some of Elm Chan's assembler libraries and adapt the ihypot function to your ATtiny. You would need to replace the MUL instructions and maybe (I haven't checked) some other instructions.
Source: https://stackoverflow.com/questions/3506404/fast-hypotenuse-algorithm-for-embedded-processor