What is the cost of inheritance?

Asked by 遥遥无期 on 2020-12-06 07:11 · 7 answers · 1131 views

This is a pretty basic question, but I'm still unsure:

If I have a class that will be instantiated millions of times -- is it advisable not to derive it from some other class?

7 Answers
  • 2020-12-06 07:49

    Creating a derived object involves calling the constructors of all its base classes, and destroying it invokes their destructors. The cost therefore depends on what those constructors do -- but if you skip inheritance and put the same functionality directly into the class, you pay the same cost. In terms of memory, every object of the derived class contains a subobject of its base class, but again, that is exactly the same memory usage as if you had placed all of those fields in one class instead of deriving.

    Be aware that in many cases composition (having a data member of the 'base' class rather than deriving from it) is the better design, in particular when you are not overriding virtual functions and the relationship between 'derived' and 'base' is not an "is a kind of" relationship. But in terms of CPU and memory usage, the two techniques are equivalent.
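    To illustrate the memory-equivalence claim, here is a minimal sketch (all names are hypothetical, not from the question). Without virtual functions, a class that inherits its fields and a class that composes them have the same layout on mainstream ABIs:

    ```cpp
    #include <cstdint>
    #include <iostream>

    struct Engine { std::uint32_t rpm = 0; };

    // Inheritance: CarA "is an" Engine (a questionable model, but same layout)
    struct CarA : Engine { std::uint32_t speed = 0; };

    // Composition: CarB "has an" Engine
    struct CarB { Engine engine; std::uint32_t speed = 0; };

    int main() {
        // On mainstream compilers both occupy the same number of bytes:
        std::cout << sizeof(CarA) << ' ' << sizeof(CarB) << '\n';  // typically "8 8"
    }
    ```

    The exact sizes are implementation-defined, but the point stands: the base subobject costs exactly what the equivalent data member would.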

  • 2020-12-06 07:52

    Inheriting from a class costs nothing at runtime.

    The class instances will of course take up more memory if you have variables in the base class, but no more than if they were in the derived class directly and you didn't inherit from anything.

    This does not take into account virtual methods, which do incur a small runtime cost.

    tl;dr: You shouldn't be worrying about it.
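    The per-object cost of virtual methods mentioned above is easy to see directly: a class with any virtual function carries a hidden vtable pointer in every instance. A small sketch (hypothetical names, sizes are implementation-defined):

    ```cpp
    #include <cstdint>
    #include <iostream>

    struct Plain { std::uint8_t x = 0; void f() {} };
    struct Virt  { std::uint8_t x = 0; virtual void f() {} virtual ~Virt() = default; };

    int main() {
        // Virt carries a hidden vtable pointer (plus padding) per object; Plain does not.
        std::cout << "Plain: " << sizeof(Plain) << '\n';  // typically 1
        std::cout << "Virt:  " << sizeof(Virt)  << '\n';  // typically 16 on 64-bit
    }
    ```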

  • 2020-12-06 07:58

    If you need the functionality of FooBase in Foo, you can either derive or use composition. Deriving costs you the vtable (when virtual functions are involved), whereas composing through a pointer costs you the pointer, the separately allocated FooBase, and still the FooBase's vtable. The two approaches are (roughly) similar in cost, so you shouldn't have to worry about the cost of inheritance itself.

  • 2020-12-06 07:59

    I think we have all been programming too much as lone wolves: we forget to weigh the cost of maintenance, readability, and extensibility of features. Here is my take.

    Inheritance Cost++

    1. On smaller projects, development time increases. It is easy to dash off quick global code; writing a proper class hierarchy that does the right thing has always taken me more time.
    2. On smaller projects, modification time increases. It is not always easy to modify existing code so that it conforms to the existing interfaces.
    3. Design time increases.
    4. The program is slightly less efficient due to message passing through accessors, rather than exposing the guts (I mean data members :)) directly.
    5. Virtual function calls through a pointer to the base class cost one extra dereference each.
    6. There is a small space penalty for RTTI.
    7. For the sake of completeness: many classes mean many types, and that is bound to increase your compilation time, however slightly.
    8. The runtime system also has to track objects through their base classes, which means a slight increase in code size plus a slight runtime penalty from the exception-delegation mechanism (whether you use exceptions or not).
    9. You don't have to twist your arm unnaturally into PIMPL if all you want is to insulate users of your interface from recompilation. (Recompiling every client IS a heavy cost, trust me.)

    Inheritance Cost--

    1. As a program grows past one or two thousand lines, it becomes more maintainable with inheritance. If you are the only programmer, you can comfortably push code without objects up to 4-5k lines.
    2. The cost of bug fixing is reduced.
    3. You can easily extend the existing framework for more challenging tasks.

    I know I am playing devil's advocate a little, but I think we have to be fair.
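    The extra dereference from point 5 of the Cost++ list can be sketched as follows (hypothetical names; the "one extra dereference" is typical codegen, not mandated by the standard):

    ```cpp
    #include <iostream>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Square : Shape {
        double side;
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }  // selected via vtable
    };

    double total(const Shape& s) {
        // Virtual dispatch: load the object's vptr, load the slot, call.
        // A non-virtual call would jump to an address fixed at link time.
        return s.area();
    }

    int main() {
        Square sq{3.0};
        std::cout << total(sq) << '\n';  // 9
    }
    ```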

  • 2020-12-06 08:01

    I'm a bit surprised by some of the responses/comments so far...

    does inheritance carry some cost (in terms of memory)

    Yes. Given:

    #include <cstdint>
    
    namespace MON {
    
    class FooBase {
    public:
        FooBase();
        virtual ~FooBase();
        virtual void f();
    private:
        uint8_t a;
    };
    
    class Foo : public FooBase {
    public:
        Foo();
        virtual ~Foo();
        virtual void f();
    private:
        uint8_t b;
    };
    
    class MiniFoo {
    public:
        MiniFoo();
        ~MiniFoo();
        void f();
    private:
        uint8_t a;
        uint8_t b;
    };
    
    class MiniVFoo {
    public:
        MiniVFoo();
        virtual ~MiniVFoo();
        void f();
    private:
        uint8_t a;
        uint8_t b;
    };
    
    } // << MON
    
    extern "C" {
    struct CFoo {
        uint8_t a;
        uint8_t b;
    };
    }

    on my system, the sizes are as follows:

    32 bit: 
        FooBase: 8
        Foo: 8
        MiniFoo: 2
        MiniVFoo: 8
        CFoo: 2
    
    64 bit:
        FooBase: 16
        Foo: 16
        MiniFoo: 2
        MiniVFoo: 16
        CFoo: 2
    

    runtime to construct or destroy an object

    additional function-call overhead and virtual dispatch where needed (including destructors where appropriate). This can cost a lot, and some really obvious optimizations such as inlining may not be performed.

    the entire subject is much more complex, but that will give you an idea of the costs.

    If speed or size is truly critical, you can often use static polymorphism (e.g. templates) to achieve an excellent balance between performance and ease of programming.
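    One common form of static polymorphism is CRTP (curiously recurring template pattern). A minimal sketch, with hypothetical names: the base knows the derived type at compile time, so the call resolves statically and is a candidate for inlining, with no vtable or vptr:

    ```cpp
    #include <iostream>

    // CRTP: FooInterface<Derived> forwards f() to the derived class's f_impl()
    // at compile time -- no virtual dispatch, no per-object vptr.
    template <typename Derived>
    struct FooInterface {
        void f() { static_cast<Derived*>(this)->f_impl(); }
    };

    struct FastFoo : FooInterface<FastFoo> {
        int calls = 0;
        void f_impl() { ++calls; }
    };

    int main() {
        FastFoo foo;
        for (int i = 0; i < 1'000'000; ++i)
            foo.f();  // direct call, trivially inlinable
        std::cout << foo.calls << '\n';  // 1000000
    }
    ```

    The trade-off is that the concrete type must be known at compile time; you lose the ability to hold heterogeneous objects through one base pointer.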

    Regarding CPU performance, I created a simple test which created millions of these types on the stack and on the heap and called f; the results are:

    FooBase 16.9%
    Foo 16.8%
    Foo2 16.6%
    MiniVFoo 16.6%
    MiniFoo 16.2%
    CFoo 15.9%
    

    note: Foo2 derives from Foo

    In the test, the allocations are added to a vector and then deleted. Without this stage, CFoo was optimized away entirely. As Jeff Dege posted in his answer, allocation time will be a huge part of this test.

    Pruning the allocation functions and vector create/destroy from the sample produces these numbers:

    Foo 19.7%
    FooBase 18.7%
    Foo2 19.4%
    MiniVFoo 19.3%
    MiniFoo 13.4%
    CFoo 8.5%
    

    which means the virtual variants take over twice as long as CFoo to execute their constructors, destructors, and calls, and MiniFoo is about 1.5 times faster.

    While we're on allocation: if you can use a single type for your implementation, you also reduce the number of allocations you must make in this scenario, because you can allocate one array of 1M objects rather than creating a list of 1M addresses and filling it with individually new'ed objects. (There are also special-purpose allocators that can reduce this weight.) Since allocation/free time dominates this test, doing so would significantly reduce the time spent allocating and freeing objects.

    Create many MiniFoos as array 0.2%
    Create many CFoos as array 0.1%
    

    Also keep in mind that MiniFoo and CFoo consume 1/4 to 1/8 of the memory per element, and a contiguous allocation removes the need to store pointers to dynamically allocated objects. You could then track an object by index rather than by pointer (a uint32_t vs. a pointer on a 64-bit arch), shrinking what clients have to store -- and you avoid all the bookkeeping the system performs per allocation (which is significant when dealing with so many small allocations).
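    The contiguous-vs-scattered difference described above can be sketched like this (a minimal illustration, not the answerer's actual benchmark; MiniFoo is redeclared here to keep the snippet self-contained):

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <vector>

    struct MiniFoo { std::uint8_t a = 0, b = 0; };  // the plain 2-byte type

    int main() {
        constexpr std::size_t N = 1'000'000;

        // One allocation: N objects laid out contiguously (~2 MB of payload).
        std::vector<MiniFoo> contiguous(N);

        // N + 1 allocations: a vector of pointers plus one heap block per
        // object, each with its own allocator bookkeeping overhead.
        std::vector<std::unique_ptr<MiniFoo>> scattered;
        scattered.reserve(N);
        for (std::size_t i = 0; i < N; ++i)
            scattered.push_back(std::make_unique<MiniFoo>());
    }
    ```

    The contiguous layout is also friendlier to the cache when iterating, which the timing numbers above don't even account for.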

    Specifically, the sizes in this test consumed:

    32 bit
        267MB for dynamic allocations (worst)
        19MB for the contiguous allocations
    64 bit
        381MB for dynamic allocations (worst)
        19MB for the contiguous allocations
    

    This means the required memory was reduced by more than a factor of ten, and the time spent allocating and freeing is reduced by even more than that!

    Static-dispatch implementations can be several times faster than mixed or dynamic dispatch. Static dispatch typically gives the optimizers more opportunities to see more of the program and optimize it accordingly.

    In practice, dynamic types tend to export more symbols (methods, dtors, vtables), which can noticeably increase the binary size.

    Assuming this is your actual use case, you can improve performance and resource usage significantly. I've presented a number of major optimizations... just in case somebody believes changing the design in such a way would qualify as 'micro'-optimization.

  • 2020-12-06 08:05

    Largely, this depends upon the implementation. But there are some commonalities.

    If your inheritance tree includes any virtual functions, the compiler will need to create a vtable for each class - a jump table with pointers to the various virtual functions. Every instance of those classes will carry along a hidden pointer to its class's vtable.

    And any call to a virtual function will involve a hidden level of indirection - rather than jumping to a function address that had been resolved at link time, a call will involve reading the address from the vtable and then jumping to that.

    Generally speaking, this overhead isn't likely to be measurable on any but the most time-critical software.

    OTOH, you said you'd be instantiating and destroying millions of these objects. In most cases, the largest cost isn't constructing the object, but allocating memory for it.

    IOW, you might benefit from using your own custom memory allocators, for the class.

    http://www.cprogramming.com/tutorial/operator_new.html
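    A minimal sketch of that idea, assuming a toy bump allocator and hypothetical names (a real pool would handle alignment, growth, and per-object reuse):

    ```cpp
    #include <cstddef>
    #include <new>
    #include <vector>

    // Toy bump allocator: hands out slots from one preallocated block, so
    // creating millions of objects never touches the general-purpose heap.
    class Pool {
        std::vector<std::byte> storage;
        std::size_t offset = 0;
    public:
        explicit Pool(std::size_t bytes) : storage(bytes) {}
        void* allocate(std::size_t n) {
            if (offset + n > storage.size()) throw std::bad_alloc{};
            void* p = storage.data() + offset;
            offset += n;
            return p;
        }
    };

    struct PooledFoo {
        int value = 0;
        static Pool pool;  // shared pool for all PooledFoo instances
        static void* operator new(std::size_t n) { return pool.allocate(n); }
        static void operator delete(void*) noexcept {}  // reclaimed en masse with the pool
    };

    Pool PooledFoo::pool{sizeof(PooledFoo) * 1'000'000};

    int main() {
        PooledFoo* f = new PooledFoo;  // served from the pool, not malloc
        delete f;                      // no-op; pool memory is released at exit
    }
    ```

    The class-level `operator new`/`operator delete` overloads mean callers use ordinary `new`/`delete` syntax while the allocation strategy stays encapsulated in the class.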
