Partial template function specialization with enable_if: make default implementation


Question


Using C++11's enable_if I want to define several specialized implementations for a function (based on the type of the parameter, say) as well as a default implementation. What is the correct way to define it?

The following example does not work as intended: the "generic" implementation is called regardless of the type T.

#include <iostream>
#include <type_traits>

template<typename T, typename Enable = void>
void dummy(T t)
{
  std::cout << "Generic: " << t << std::endl;
}


template<typename T, typename std::enable_if<std::is_integral<T>::value>::type>
void dummy(T t)
{
  std::cout << "Integral: " << t << std::endl;
}


template<typename T, typename std::enable_if<std::is_floating_point<T>::value>::type>
void dummy(T t)
{
  std::cout << "Floating point: " << t << std::endl;
}

int main() {
  dummy(5); // Print "Generic: 5"
  dummy(5.); // Print "Generic: 5"
}

One solution in this minimal example is to explicitly declare the "generic" implementation as applying to neither integral nor floating-point types, using

std::enable_if<!std::is_integral<T>::value && !std::is_floating_point<T>::value>::type

This is exactly what I want to avoid: in my real use cases there are many specialized implementations, and I would like to avoid a very long (and error-prone!) condition for the default implementation.
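
For reference, a minimal sketch of that workaround (not part of the original question; it assumes the constrained overloads are written with the enable_if<...>::type* = nullptr default-argument form so they participate in overload resolution):

#include <iostream>
#include <type_traits>

// Constrained overloads, enabled only for the matching category of T.
template<typename T,
         typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
void dummy(T t) { std::cout << "Integral: " << t << std::endl; }

template<typename T,
         typename std::enable_if<std::is_floating_point<T>::value>::type* = nullptr>
void dummy(T t) { std::cout << "Floating point: " << t << std::endl; }

// The "generic" overload has to repeat the negation of every condition above;
// this is exactly the error-prone bookkeeping described in the question.
template<typename T,
         typename std::enable_if<!std::is_integral<T>::value &&
                                 !std::is_floating_point<T>::value>::type* = nullptr>
void dummy(T t) { std::cout << "Generic: " << t << std::endl; }

int main() {
  dummy(5);     // Print "Integral: 5"
  dummy(5.);    // Print "Floating point: 5"
  dummy("abc"); // Print "Generic: abc"
}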


Answer 1:


Functions cannot be partially specialized. I assume what you want is to prefer the overloads that state an explicit condition? One way to achieve that is to add an ellipsis (variadic-argument) parameter to the declaration of the default function, since a match against an ellipsis parameter has the lowest priority in overload resolution:

#include <iostream>
#include <type_traits>

template<typename T>
void dummy_impl(T t, ...)
{
  std::cout << "Generic: " << t << std::endl;
}


template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
void dummy_impl(T t, int)
{
  std::cout << "Integral: " << t << std::endl;
}


template<typename T, typename std::enable_if<std::is_floating_point<T>::value>::type* = nullptr>
void dummy_impl(T t, int)
{
  std::cout << "Floating point: " << t << std::endl;
}

template <class T>
void dummy(T t) {
   dummy_impl(t, int{});
}

int main() {
  dummy(5); 
  dummy(5.); 
  dummy("abc"); 
}

Output:

Integral: 5
Floating point: 5
Generic: abc


Another option, as @doublep mentions in a comment, is to use a struct containing the implementation of your function and then partially specialize it.
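
A minimal sketch of that class-template approach (the DummyImpl name is made up for this illustration and not taken from the answer):

#include <iostream>
#include <type_traits>

// Primary template: the default ("generic") implementation.
template<typename T, typename Enable = void>
struct DummyImpl
{
    static void apply(T t) { std::cout << "Generic: " << t << std::endl; }
};

// Partial specialization chosen for integral types.
template<typename T>
struct DummyImpl<T, typename std::enable_if<std::is_integral<T>::value>::type>
{
    static void apply(T t) { std::cout << "Integral: " << t << std::endl; }
};

// Partial specialization chosen for floating-point types.
template<typename T>
struct DummyImpl<T, typename std::enable_if<std::is_floating_point<T>::value>::type>
{
    static void apply(T t) { std::cout << "Floating point: " << t << std::endl; }
};

// Thin function wrapper so callers keep writing dummy(x).
template<typename T>
void dummy(T t) { DummyImpl<T>::apply(t); }

int main() {
  dummy(5);     // Integral: 5
  dummy(5.);    // Floating point: 5
  dummy("abc"); // Generic: abc
}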




Answer 2:


You can introduce a rank to give priority to some of your overloads:

template <unsigned int N>
struct rank : rank<N - 1> { };

template <>
struct rank<0> { };

You can then define your dummy overloads like this:

template<typename T>
void dummy(T t, rank<0>)
{
    std::cout << "Generic: " << t << std::endl;
}

template<typename T, 
         typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
void dummy(T t, rank<1>)
{
    std::cout << "Integral: " << t << std::endl;
}

template<typename T, 
         typename std::enable_if<std::is_floating_point<T>::value>::type* = nullptr>
void dummy(T t, rank<1>)
{
    std::cout << "Floating point: " << t << std::endl;
}

Then, you can hide the call behind a dispatch:

template <typename T>
void dispatch(T t)
{
   return dummy(t, rank<1>{});
}

Usage:

int main() 
{
    dispatch(5);    // Print "Integral: 5"
    dispatch(5.);   // Print "Floating point: 5"
    dispatch("hi"); // Print "Generic: hi"
}



Explanation:

Using rank introduces "priority" because implicit conversions are required to convert a rank<X> to a rank<Y> when X > Y. dispatch first tries to call dummy with rank<1>, giving priority to your constrained overloads. If enable_if fails, rank<1> is implicitly converted to rank<0> and enters the "fallback" case.
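
As a small illustration of that mechanism (probe is a made-up name, not part of the answer): a rank<1> argument binds to a rank<0> parameter only through a derived-to-base conversion, so an exact rank<1> match wins whenever it is viable:

#include <iostream>

template <unsigned int N>
struct rank : rank<N - 1> { };

template <>
struct rank<0> { };

void probe(rank<0>) { std::cout << "fallback\n"; }   // worse match for a rank<1> argument (derived-to-base conversion)
void probe(rank<1>) { std::cout << "preferred\n"; }  // exact match for a rank<1> argument

int main()
{
    probe(rank<1>{}); // prints "preferred": the exact match beats the base-class overload
    probe(rank<0>{}); // prints "fallback": rank<0> cannot convert to rank<1>
}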


Bonus: here's a C++17 implementation using if constexpr(...).

template<typename T>
void dummy(T t)
{
    if constexpr(std::is_integral_v<T>)
    {
        std::cout << "Integral: " << t << std::endl;
    }
    else if constexpr(std::is_floating_point_v<T>)
    {
        std::cout << "Floating point: " << t << std::endl;
    }
    else
    {
        std::cout << "Generic: " << t << std::endl;
    }
}





Answer 3:


I would use tag dispatching like so:

namespace Details
{
    namespace SupportedTypes
    {
        struct Integral {};
        struct FloatingPoint {};
        struct Generic {};
    }


    template <typename T, typename = void>
    struct GetSupportedType
    {
        typedef SupportedTypes::Generic Type;
    };

    template <typename T>
    struct GetSupportedType< T, typename std::enable_if< std::is_integral< T >::value >::type >
    {
        typedef SupportedTypes::Integral Type;
    };

    template <typename T>
    struct GetSupportedType< T, typename std::enable_if< std::is_floating_point< T >::value >::type >
    {
        typedef SupportedTypes::FloatingPoint Type;
    };

    template <typename T>
    void dummy(T t, SupportedTypes::Generic)
    {
        std::cout << "Generic: " << t << std::endl;
    }

    template <typename T>
    void dummy(T t, SupportedTypes::Integral)
    {
        std::cout << "Integral: " << t << std::endl;
    }

    template <typename T>
    void dummy(T t, SupportedTypes::FloatingPoint)
    {
        std::cout << "Floating point: " << t << std::endl;
    }
} // namespace Details

And then hide the boilerplate code like so:

template <typename T>
void dummy(T t)
{
    typedef typename Details::GetSupportedType< T >::Type SupportedType;
    Details::dummy(t, SupportedType());
}

GetSupportedType gives you one central place that decides which tag type to use; that is the trait you specialize every time you add support for a new type.

Then you just invoke the right dummy overload by providing an instance of the right tag.

Finally, invoke dummy:

dummy(5); // Print "Integral: 5"
dummy(5.); // Print "Floating point: 5"
dummy("lol"); // Print "Generic: lol"


Source: https://stackoverflow.com/questions/44585504/partial-template-function-specialization-with-enable-if-make-default-implementa
