How do I use floating point number literals when using generic types?

Submitted by 一世执手 on 2019-11-27 05:39:09

Use the FromPrimitive trait:

use num_traits::{cast::FromPrimitive, float::Float};

fn scale_float<T: Float + FromPrimitive>(x: T) -> T {
    // from_f64 returns an Option<T>; the conversion always succeeds for f32 and f64.
    x * T::from_f64(0.54).unwrap()
}
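
A minimal usage sketch (assuming num_traits is a dependency): FromPrimitive is implemented for both f32 and f64, so the same generic function works for either width.

fn main() {
    let a: f32 = scale_float(2.0_f32);
    let b: f64 = scale_float(2.0_f64);
    println!("{} {}", a, b);
}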

Or use the standard library From / Into traits:

use num_traits::float::Float;

fn scale_float<T>(x: T) -> T
where
    T: Float,
    f64: Into<T>,
{
    x * 0.54.into()
}
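
Note that with these bounds only f64 qualifies in practice: the standard library has no lossy From<f64> impl for f32, so f64: Into<f32> does not hold. A minimal usage sketch:

fn main() {
    let a: f64 = scale_float(1.23_f64);
    println!("{}", a);
    // scale_float(1.23_f32) would not compile: f32 does not satisfy `f64: Into<f32>`.
}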

You can't create a Float from a literal directly. I suggest an approach similar to the FloatConst trait:

use num_traits::float::Float;

trait SomeDomainSpecificScaleFactor {
    fn factor() -> Self;
}

impl SomeDomainSpecificScaleFactor for f32 {
    fn factor() -> Self {
        0.54
    }
}

impl SomeDomainSpecificScaleFactor for f64 {
    fn factor() -> Self {
        0.54
    }
}

fn scale_float<T: Float + SomeDomainSpecificScaleFactor>(x: T) -> T {
    x * T::factor()
}
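
For example, both impls can then be used through the same generic function (a minimal sketch using only the definitions above):

fn main() {
    let a: f32 = scale_float(2.0_f32);
    let b: f64 = scale_float(2.0_f64);
    println!("{} {}", a, b);
}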

In certain cases, you can add a restriction that the generic type must be able to be multiplied by the type of the literal. Here, we allow any type that can be multiplied by an f64, so long as the multiplication produces a value of type T, via the trait bound Mul<f64, Output = T>:

use num_traits::float::Float; // 0.2.6
use std::ops::Mul;

fn scale_float<T>(x: T) -> T
where
    T: Float + Mul<f64, Output = T>,
{
    // The extra Mul<f64, Output = T> bound lets the body multiply by an f64 literal directly.
    x * 0.54
}

fn main() {
    let a: f64 = scale_float(1.23);
}

This may not solve the original problem directly, but it can, depending on which concrete types you need to work with.
