optimization

MongoDB Text Search on Large Dataset

只谈情不闲聊 Submitted on 2020-05-15 07:11:52
Question: I have a book collection with 7.7 million records, and I have set up a text index that lets me search the collection by title and author:

db.book.createIndex(
    { title: "text", author: "text" },
    { sparse: true, background: true, weights: { title: 15, author: 5 }, name: "text_index" }
)

The problem is that when I use a search query that returns a lot of results, e.g. John, and then sort by the textScore, the query takes over 60 seconds. Please see
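
For reference, the kind of query being described would look like the following in the mongo shell. This is a minimal sketch rather than the asker's actual code; the projected field name score and the limit of 10 are arbitrary choices:

db.book.find(
    { $text: { $search: "John" } },
    { score: { $meta: "textScore" } }
).sort( { score: { $meta: "textScore" } } ).limit( 10 )

Sorting on { $meta: "textScore" } forces MongoDB to score and sort every matching document before the limit can apply, which is consistent with very common terms such as "John" being slow on a 7.7-million-document collection.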

Need help fixing an algorithm that approximates pi

被刻印的时光 ゝ Submitted on 2020-05-15 06:37:05
Question: I'm trying to write C code for an algorithm that approximates pi. It's supposed to compute the volume of a cube and the volume of a sphere inside that cube (the sphere's radius is 1/2 of the cube's side). Then I am supposed to divide the cube's volume by the sphere's and multiply by 6 to get pi. It's working, but it does something weird in the part that is supposed to compute the volumes. I figure it has something to do with the delta I chose for the approximations. With a cube of side 4 instead
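
A minimal sketch of the kind of grid approximation being described (my reconstruction, not the asker's code). Note that for a sphere inscribed in a cube of side s, V_sphere = (pi/6)·s³, so it is the sphere's volume divided by the cube's, times 6, that yields pi:

#include <stdio.h>

int main(void) {
    const double side = 4.0;      /* cube side, as in the question */
    const double r = side / 2.0;  /* radius of the inscribed sphere */
    const double delta = 0.01;    /* grid step; smaller is more accurate but slower */

    double sphere_volume = 0.0;
    for (double x = -r; x < r; x += delta)
        for (double y = -r; y < r; y += delta)
            for (double z = -r; z < r; z += delta)
                if (x * x + y * y + z * z <= r * r)       /* grid point inside the sphere */
                    sphere_volume += delta * delta * delta;

    double cube_volume = side * side * side;
    printf("pi is approximately %f\n", 6.0 * sphere_volume / cube_volume);
    return 0;
}

Using an integer loop counter and computing x = -r + i * delta avoids accumulating rounding error in the loop variable, which is one plausible source of odd volume values when the step is varied.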

GCC multiple optimization flags

丶灬走出姿态 Submitted on 2020-05-14 14:38:09
Question: I have some legacy code that compiles with both -O2 and -O3 set. From the GCC man page I get the guarantee that: -O3 turns on all optimizations specified by -O2 and also turns on the -finline-functions, -funswitch-loops, -fpredictive-commoning, -fgcse-after-reload and -ftree-vectorize options. So, at first glance it would seem likely that turning both of these flags on would be the same as just -O3. However, that got me thinking: is that the right thing to do in that case, as -O2 is probably
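
The GCC manual also states that when several -O options are given, the last one takes effect, so -O2 -O3 should behave exactly like plain -O3. One quick way to check this on a given compiler (a sanity test, assuming a reasonably recent GCC) is to compare the sets of enabled optimization passes:

gcc -O2 -O3 -Q --help=optimizers > both.txt
gcc -O3     -Q --help=optimizers > o3.txt
diff both.txt o3.txt    # empty output means both invocations enable the same passes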

C++ Composition with abstract class

放肆的年华 Submitted on 2020-05-12 04:59:08
Question: Let's say I have an abstract class that is expensive to create and copy:

class AbstractBase {
public:
    AbstractBase() { for (int i = 0; i < 50000000; ++i) { values.push_back(i); } }
    virtual void doThing() = 0;
private:
    vector<int> values;
};

It has two subclasses, FirstDerived:

class FirstDerived : public AbstractBase {
public:
    void doThing() { std::cout << "I did the thing in FirstDerived!\n"; }
};

and SecondDerived:

class SecondDerived : public AbstractBase {
public:
    void doThing() { std:

Is there a way to make this function faster? (C)

三世轮回 Submitted on 2020-05-11 06:38:04
Question: I have code in C that does addition the same way a human does, so if for example I have two arrays A[0..n-1] and B[0..n-1], the method computes C[0]=A[0]+B[0], C[1]=A[1]+B[1], ... I need help making this function faster, even if the solution uses intrinsics. My main problem is that I have a really big dependency problem: iteration i+1 depends on the carry of iteration i, since I use base 10. So if A[0]=6 and B[0]=5, C[0] must be 1 and I have a carry of 1 for
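
The loop being described looks roughly like the sketch below (my reconstruction, using the question's A, B, C names and base-10 digits). It makes the loop-carried dependency explicit: every iteration needs the carry produced by the previous one.

/* Digit-by-digit addition, least significant digit first. */
void add_digits(const char *A, const char *B, char *C, int n) {
    char carry = 0;
    for (int i = 0; i < n; ++i) {
        char sum = A[i] + B[i] + carry;  /* each digit is 0..9, so sum is 0..19 */
        carry = sum >= 10;               /* carry into the next position */
        C[i] = sum - 10 * carry;         /* keep a single decimal digit */
    }
}

A common way to weaken this dependency is to pack many decimal digits into a larger limb (for example, nine digits per 32-bit word, i.e. working in base 10^9), so a carry only has to propagate once per limb instead of once per digit.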

Haskell: how to detect “lazy memory leaks”

霸气de小男生 Submitted on 2020-05-10 03:26:27
Question: After a few hours of debugging, I realized that a very simple toy example was not efficient due to a missing ! in the expression return $ 1 + x (thanks duplode!... but how come GHC does not optimize that??). I also realized it because I was comparing it with Python code that was quicker, but I won't always write Python code to benchmark my code... So here is my question: is there a way to automatically detect these "lazy memory leaks" that slow down a program for no real reason? I'm still
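
There is no single switch that flags a missing !, but GHC's runtime statistics and heap profiling are the usual way to spot a space leak. A sketch of the standard workflow (Main.hs is a placeholder module name):

ghc -O2 -prof -fprof-auto Main.hs   # build with profiling support
./Main +RTS -s                      # GC summary; a large "maximum residency" hints at a leak
./Main +RTS -hc -p                  # heap profile by cost centre, written to Main.hp
hp2ps -c Main.hp                    # render the profile to Main.ps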

How to vectorize and optimize these functions in C?

十年热恋 Submitted on 2020-05-09 09:56:58
Question: I have these functions; the results are correct, but the compiler doesn't vectorize them. How can I get the compiler to vectorize them, and how can I optimize this code?

void LongNumSet( char *L, unsigned N, char digit ) {
    for (int i = 0; i < N; ++i) {
        L[i] = digit;
    }
}

void LongNumCopy( char *Vin, char *Vout, unsigned N ) {
    for (int i = 0; i < N; ++i) {
        Vout[i] = Vin[i];
    }
}

char LongNumAddition( char *__restrict Vin1, char * __restrict Vin2, char * __restrict Vout, unsigned N ) {
    char CARRY =
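
Independently of any rewrite, GCC can report which loops it vectorized and why it gave up on the others, which helps confirm where the problem is (longnum.c is a placeholder file name):

gcc -O3 -fopt-info-vec-optimized -fopt-info-vec-missed -c longnum.c

The first two loops are plain fill and copy loops, which GCC typically turns into memset/memcpy-style code at -O3; the addition loop is the hard one, because each iteration depends on the carry produced by the previous iteration.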

dict.get(key, default) vs dict.get(key) or default

江枫思渺然 Submitted on 2020-05-09 06:11:21
Question: Is there any difference (performance or otherwise) between the following two statements in Python? v = my_dict.get(key, some_default) vs v = my_dict.get(key) or some_default 回答1 (Answer 1): There is a huge difference if your value is falsy:

>>> d = {'foo': 0}
>>> d.get('foo', 'bar')
0
>>> d.get('foo') or 'bar'
'bar'

You should not use or default if your values can be falsy. On top of that, using or adds additional bytecode; a test and a jump have to be performed. Just use dict.get(), there is no
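
The extra bytecode the answer mentions can be seen directly with the dis module (a small illustration; the function names are arbitrary):

import dis

def with_default(d, key):
    return d.get(key, 'bar')

def with_or(d, key):
    return d.get(key) or 'bar'

dis.dis(with_default)   # a single call with both arguments
dis.dis(with_or)        # the call, then a truth test and jump, then a separate load of 'bar'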