optimization

JavaScript multiple assignment re-evaluation or result passing?

Submitted by 好久不见 on 2020-01-06 03:18:04
Question: The multiple assignment (or is it called chaining?) that I'm talking about is an assignment such as: a = b = c = 2; after which a, b, and c are all equal to 2. My question about optimization is, given the following code: var dom1 = document.getElementById('layout_logos'); var dom2 = document.getElementById('layout_sitenav'); ... function layout_onscroll(){ ... dom1.style.height = dom2.style.top = maxLogoHeight - scrollTop; ... } From what I've been reading, I'm afraid the code in layout

Is it faster to pass a struct or its members individually?

Submitted by [亡魂溺海] on 2020-01-06 02:12:14
Question: If I have a struct Foo { int A; int B; }; which method call is faster: void Bar(Foo MyFoo) or void Bar(int A, int B)? Or are they the same? Answer 1: This can actually be platform- and compiler-dependent, especially when sizeof(Foo) is small. If the ints are 16-bit, only one 32-bit stack push is needed, which is smaller than a pointer on 64-bit architectures, but on those, ints are often more than 16 bits. As separate arguments, each will be a separate push to the stack, unless the compiler

How do I profile how long a piece of code takes to execute in Objective-C/Cocoa for optimization purposes

Submitted by 房东的猫 on 2020-01-06 01:58:09
Question: Let's say I've got two interchangeable pieces of code and I want to figure out which one of them takes less processor time to execute. How would I do this? To get a very rough estimate I could just put NSLog() calls on either side of the code I wanted to profile, but it seems like the results could be skewed if the processor is otherwise very busy. Answer 1: Unless one of these two pieces of code is already in your app, and you've already profiled your app's overall performance to determine that the

optimise “binary_fold” algorithm and make it left (or right) associative

Submitted by 天大地大妈咪最大 on 2020-01-05 23:33:03
Question: Following my original question and considering some of the proposed solutions, I came up with this for C++14: #include <algorithm> #include <exception> #include <iterator> #include <cstddef> template<class It, class Func> auto binary_fold(It begin, It end, Func op) -> decltype(op(*begin, *end)) { std::ptrdiff_t diff = end - begin; switch (diff) { case 0: throw std::out_of_range("binary fold on empty container"); case 1: return *begin; case 2: return op(*begin, *(begin + 1)); default: { //

Optimize array query match with operator $all in MongoDb

Submitted by ⅰ亾dé卋堺 on 2020-01-05 15:05:30
Question: In a collection of 130k elements with the structure: { "tags": ["restaurant", "john doe"] } there are 40k documents with the "restaurant" tag but only 2 with "john doe". So the next two queries behave differently: // 0.100 seconds (40,000 objects scanned) {"tags": {$all: [/^restaurant/, /^john doe/]}} // 0.004 seconds (2 objects scanned) {"tags": {$all: [/^john doe/, /^restaurant/]}} Is there a way to optimize the query without sorting the tags in the client? The only way I can imagine now is putting

How much does parallelization help the performance if the program is memory-bound?

Submitted by 偶尔善良 on 2020-01-05 14:09:32
Question: I parallelized a Java program. On a Mac with 4 cores, below are the times for different numbers of threads. threads # 1 2 4 8 16 time 2597192200 1915988600 2086557400 2043377000 1931178200 On a Linux server with two sockets, each with 4 cores, below are the measured times. threads # 1 2 4 8 16 time 4204436859 2760602109 1850708620 2370905549 2422668438 As you can see, the speedup is far from linear. There is almost no parallelization overhead in this case, such as synchronization or I/O

iOS low-level serialisation

Submitted by 馋奶兔 on 2020-01-05 09:22:58
Question: Here is a Java / Android function: public static byte[] serialize(MyClass myObject) { int byteArraySizeNeeded = getByteArraySizeNeededForSerialize(myObject); byte[] result = new byte[byteArraySizeNeeded]; ByteBuffer bb = ByteBuffer.wrap(result); bb.putShort((short) (byteArraySizeNeeded-2)); // 2 - no need to read at deserialisation! bb.putLong(myObject.getProperty1ValueWhichIsALong()); // 8 bb.putDouble(myObject.getProperty2ValueWhichIsADouble()); // 8 bb.putFloat(myObject