Question
I wanted to implement a function computing the number of digits within any generic type of integer. Here is the code I came up with:

```rust
extern crate num;
use num::Integer;

fn int_length<T: Integer>(mut x: T) -> u8 {
    if x == 0 {
        return 1;
    }
    let mut length = 0u8;
    if x < 0 {
        length += 1;
        x = -x;
    }
    while x > 0 {
        x /= 10;
        length += 1;
    }
    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
```
And here is the compiler output:

```text
error[E0308]: mismatched types
 --> src/main.rs:5:13
  |
5 |     if x == 0 {
  |             ^ expected type parameter, found integral variable
  |
  = note: expected type `T`
             found type `{integer}`

error[E0308]: mismatched types
  --> src/main.rs:10:12
   |
10 |     if x < 0 {
   |            ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error: cannot apply unary operator `-` to type `T`
  --> src/main.rs:12:13
   |
12 |         x = -x;
   |             ^^

error[E0308]: mismatched types
  --> src/main.rs:15:15
   |
15 |     while x > 0 {
   |               ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
  --> src/main.rs:16:9
   |
16 |         x /= 10;
   |         ^ cannot use `/=` on type `T`
```
I understand that the problem comes from my use of constants within the function, but I don't understand why the trait specification as `Integer` doesn't solve this.

The documentation for `Integer` says it implements the `PartialOrd`, etc. traits with `Self` (which I assume refers to `Integer`). By using integer constants which also implement the `Integer` trait, aren't the operations defined, and shouldn't the compiler compile without errors?

I tried suffixing my constants with `i32`, but the error message is the same, replacing `_` with `i32`.
Answer 1:
Many things are going wrong here:

- As Shepmaster says, `0` and `1` cannot be converted to everything implementing `Integer`. Use `Zero::zero` and `One::one` instead.
- `10` can definitely not be converted to anything implementing `Integer`; you need to use `NumCast` for that.
- `a /= b` is not sugar for `a = a / b` but a separate trait that `Integer` does not require.
- `-x` is a unary operation which is not part of `Integer` but requires the `Neg` trait (since it only makes sense for signed types).
Here's an implementation. Note that you need a bound on `Neg` to make sure that it results in the same type as `T`:
```rust
extern crate num;

use num::{Integer, NumCast};
use std::ops::Neg;

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + NumCast,
{
    if x == T::zero() {
        return 1;
    }

    let mut length = 0;
    if x < T::zero() {
        length += 1;
        x = -x;
    }

    while x > T::zero() {
        x = x / NumCast::from(10).unwrap();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
```
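If pulling in `NumCast` feels heavy, a similar sketch compiles with only standard-library traits, under the assumption that the argument type can be built from a `u8` literal via `From<u8>` (true for `i16`, `i32`, `i64`, and `i128`, but not `i8`). This is an alternative illustration, not the answer's own code:

```rust
use std::ops::{Div, Neg};

// Std-only sketch: convert the literals 0 and 10 into T up front via
// From<u8>, then compare and divide using the converted values.
fn int_length<T>(mut x: T) -> u8
where
    T: From<u8> + PartialOrd + Div<Output = T> + Neg<Output = T> + Copy,
{
    let zero = T::from(0u8);
    let ten = T::from(10u8);
    if x == zero {
        return 1;
    }
    let mut length = 0u8;
    if x < zero {
        length += 1; // count the minus sign
        x = -x;
    }
    while x > zero {
        x = x / ten;
        length += 1;
    }
    length
}

fn main() {
    println!("{}", int_length(45i32)); // 2
    println!("{}", int_length(-45i64)); // 3
}
```

The trade-off is reach: `NumCast` also covers floats and `u8`-incompatible types, while `From<u8>` is infallible but excludes `i8`.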
Answer 2:
The problem is that the `Integer` trait can be implemented by anything. For example, you could choose to implement it on your own struct! There wouldn't be a way to convert the literal `0` or `1` to your struct. I'm too lazy to show an example of implementing it, because there's 10 or so methods. ^_^

This is why `Zero::zero` and `One::one` exist. You can (very annoyingly) create all the other constants from repeated calls to those.
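That "build everything from zero and one" idea can be sketched with minimal hypothetical `Zero`/`One` traits (stand-ins for the num crate's versions, defined inline so the example is self-contained):

```rust
use std::ops::Add;

// Hypothetical minimal traits, mimicking num's Zero and One.
trait Zero {
    fn zero() -> Self;
}
trait One {
    fn one() -> Self;
}

// Implemented only for i32 here, for brevity.
impl Zero for i32 {
    fn zero() -> Self { 0 }
}
impl One for i32 {
    fn one() -> Self { 1 }
}

// Build the constant 10 generically: start from zero and add one ten times.
fn ten<T>() -> T
where
    T: Zero + One + Add<Output = T> + Copy,
{
    let one = T::one();
    let mut n = T::zero();
    for _ in 0..10 {
        n = n + one;
    }
    n
}

fn main() {
    println!("{}", ten::<i32>()); // 10
}
```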
You can also use the `From` and `Into` traits to convert to your generic type:
```rust
extern crate num;

use num::Integer;
use std::ops::{DivAssign, Neg};

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + DivAssign,
    u8: Into<T>,
{
    let zero = 0.into();
    if x == zero {
        return 1;
    }

    let mut length = 0u8;
    if x < zero {
        length += 1;
        x = -x;
    }

    while x > zero {
        x /= 10.into();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
```
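For what it's worth, the `u8: Into<T>` literal trick above does not strictly depend on the num crate. A sketch that relaxes `Integer` to std's `PartialOrd` (an assumption that holds here because the function only compares, negates, and divides) also compiles; note that `i8` still does not satisfy `u8: Into<T>`, since `u8`'s range exceeds `i8`'s:

```rust
use std::ops::{DivAssign, Neg};

// Std-only variant of the answer's pattern: same u8 -> T literal
// conversion, but with PartialOrd standing in for the Integer bound.
fn int_length<T>(mut x: T) -> u8
where
    T: PartialOrd + Neg<Output = T> + DivAssign + Copy,
    u8: Into<T>,
{
    let zero: T = 0u8.into();
    if x == zero {
        return 1;
    }
    let mut length = 0u8;
    if x < zero {
        length += 1; // count the minus sign
        x = -x;
    }
    while x > zero {
        x /= 10u8.into();
        length += 1;
    }
    length
}

fn main() {
    println!("{}", int_length(-12345i32)); // 6
}
```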
See also:
- What is the proper way to get literals when using the Float trait?
Source: https://stackoverflow.com/questions/28565440/how-do-i-use-integer-number-literals-when-using-generic-types