compiler-optimization

Does the Swift compiler/linker automatically remove unused methods/classes/extensions, etc.?

梦想与她 submitted on 2019-12-05 14:28:41
We have a lot of code which is usable in any iOS application we write, such as:

- Custom/common controls
- Extensions on common objects like UIView, UIImage and UIViewController
- Global utility functions
- Global constants
- Related sets of files that make up common 'features', like a picker screen that can be used with anything that can be enumerated

For reasons unrelated to this question, we cannot use static or dynamic libraries; these must be included in the project as actual source files. There are several hundred of these 'core' files, so what I've been doing is adding all the files to the

Create a function that always returns zero, but the optimizer doesn't know

最后都变了- submitted on 2019-12-05 13:32:15
I would like to create a function that always returns zero, but this fact should not be obvious to the optimizer, so that subsequent calculations using the value won't constant-fold away due to the "known zero" status. In the absence of link-time optimization, this is generally as simple as putting it in its own compilation unit: int zero() { return 0; } The optimizer can't see across units, so the always-zero nature of this function won't be discovered. However, I need something that works with LTO, and ideally with as many future clever optimizations as possible. I considered reading from a

Why not allow common subexpression elimination on const nonvolatile member functions?

喜夏-厌秋 submitted on 2019-12-05 13:20:48
One of the goals of C++ is to allow user-defined types to behave as nicely as built-in types. One place where this seems to fall short is compiler optimization. If we assume that a const nonvolatile member function is the moral equivalent of a read (for a user-defined type), then why not allow a compiler to eliminate repeated calls to such a function? For example: class C { ... public: int get() const; }; int main() { C c; int x{c.get()}; x = c.get(); // why not allow the compiler to eliminate this call? } The argument for allowing this is the same as the argument for copy elision: while it changes

LLVM and the future of optimization

我是研究僧i submitted on 2019-12-05 12:44:21
Question: I realize that LLVM has a long way to go, but theoretically, can the optimizations that are in GCC/ICC/etc. for individual languages be applied to LLVM bytecode? If so, does this mean that any language that compiles to LLVM bytecode has the potential to be equally as fast? Or will language-specific optimizations (before the LLVM bytecode stage) always play a large part in optimizing any specific program? I don't know much about compilers or optimizations (only enough to be dangerous

Does using `size_t` for lengths impact compiler optimizations?

本秂侑毒 submitted on 2019-12-05 12:38:09
While reading this question, I saw the first comment saying: size_t for length is not a great idea, the proper types are signed ones for optimization/UB reasons. followed by another comment supporting the reasoning. Is this true? The question matters because if I were to write, e.g., a matrix library, the image dimensions could be size_t, just to avoid checking whether they are negative. But then all loops would naturally use size_t. Could this impact optimization? size_t being unsigned is mostly a historical accident - if your world is 16 bit, going from 32767 to 65535 maximum

What's the difference between partial evaluation and function inlining in a functional language?

前提是你 submitted on 2019-12-05 12:33:08
Question: I know that: Function inlining is to replace a function call with the function definition. Partial evaluation is to evaluate the known (static) parts of a program at compile time. There is a distinction between the two in imperative languages like C, where operators are distinct from functions. However, is there any difference between the two in functional languages like Haskell, where operators are functions too? Is the only difference between the two that function inlining can be performed

What's the advantage of compiler instruction scheduling compared to dynamic scheduling? [closed]

≡放荡痞女 submitted on 2019-12-05 12:05:46
Nowadays, superscalar RISC CPUs usually support out-of-order execution, with branch prediction and speculative execution; they schedule work dynamically. What's the advantage of compiler instruction scheduling compared to an out-of-order CPU's dynamic scheduling? Does compile-time static scheduling matter at all for an out-of-order CPU, or only for simple in-order CPUs? Currently, most software instruction-scheduling work seems to focus on VLIW or simple CPUs. The GCC wiki's scheduling page also shows little interest in updating GCC's scheduling algorithms. Advantage of static (compiler)

Crash in C++ code due to undefined behaviour or compiler bug?

天大地大妈咪最大 submitted on 2019-12-05 11:06:58
Question: I am experiencing strange crashes, and I wonder whether the cause is a bug in my code or in the compiler. When I compile the following C++ code with Microsoft Visual Studio 2010 as an optimized release build, it crashes at the marked line: struct tup { int x; int y; }; class C { public: struct tup* p; struct tup* operator--() { return --p; } struct tup* operator++(int) { return p++; } virtual void Reset() { p = 0; } }; int main () { C c; volatile int x = 0; struct tup v1; struct tup v2 = {0, x}; c.p =

Can compiler optimization eliminate a function repeatedly called in a for-loop's conditional?

徘徊边缘 submitted on 2019-12-05 10:49:53
I was reading about hash functions (I'm an intermediate CS student) and came across this: int hash (const string & key, int tableSize) { int hashVal = 0; for (int i = 0; i < key.length(); i++) hashVal = 37 * hashVal + key[i]; ..... return hashVal; } I noticed that this code would be faster if, instead of calling key.length() on every iteration of the for-loop, we did this: int n = key.length(); for (int i = 0; i < n; i++) My question is: since this is such an obvious way to slightly improve performance, does the compiler automatically do this for us? I don't yet know much

Empty derived optimization

假如想象 submitted on 2019-12-05 10:44:35
Most C++ programmers know about the empty base class optimization as a technique/idiom. What happens with empty child classes? For example: class EmptyBase { int i; }; template<typename T> class Derived : T { }; std::cout << sizeof(Derived<EmptyBase>); // Is there a standard verdict on this? Similarly to the EBO, there should be an EDO stating that since a derived class doesn't provide any more members, nor introduce any virtual ones to its parametrizing type, it should not require more memory. Considering the various situations in which something like that might appear (multiple inheritance,