optimization

Optimal substructure

Submitted by 六眼飞鱼酱① on 2020-01-02 07:46:06
Question: I'm trying to get a fuller picture of the use of the optimal substructure property in dynamic programming, yet I can't see why we have to prove that any optimal solution to the problem contains within it optimal solutions to the sub-problems. Wouldn't it be enough to show that some optimal solution to the problem has this property, and then use this to argue that, since the solution built by our recursive algorithm is at least as good as an optimal solution, it will itself be optimal?
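As a concrete reference point for what "optimal solutions to the sub-problems" means, here is a minimal rod-cutting sketch (an illustration chosen by the editor, not taken from the question): the recursion explicitly builds the value of a length-n solution out of the optimal values of shorter rods.

```python
from functools import lru_cache

def rod_cut(prices, n):
    """Max revenue from cutting a rod of length n, where prices[i] is the
    price of a piece of length i (prices[0] is unused).

    Optimal substructure: the best cutting of `length` is some first piece
    of length i plus an *optimal* cutting of the remaining length - i."""
    @lru_cache(maxsize=None)
    def best(length):
        if length == 0:
            return 0
        return max(prices[i] + best(length - i) for i in range(1, length + 1))
    return best(n)

print(rod_cut([0, 1, 5, 8, 9], 4))  # 10: cut into two pieces of length 2
```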

Optimizing string parsing with Python

Submitted by 你。 on 2020-01-02 07:20:07
Question: I have a string in the form 'AB(AB(DDC)C)A(BAAC)DAB(ABC)'. Each character represents an element (A, B, C or D). The parentheses to the right of an element contain that element's child string (which may be absent). For example, given 'AB(AB(DDC)C)A(BAAC)DA', the top level would be AB (AB(DDC)C) A (BAAC) DA --> [A, B, A, D, A] and the corresponding children would be [None, AB(DDC)C, BAAC, None, None]. The children are to be parsed recursively as well. I have implemented a solution here: def parse …
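The asker's parse function is cut off in the excerpt. Purely as a sketch of the format described above (the editor's code, not the asker's), a recursive parser that pairs each element with its recursively parsed child block could look like this:

```python
def parse(s):
    """Parse strings like 'AB(AB(DDC)C)A(BAAC)DA' into a list of
    (element, children) pairs, where children is the recursively parsed
    content of the parentheses following the element, or None."""
    result = []
    i = 0
    while i < len(s):
        element = s[i]
        i += 1
        children = None
        if i < len(s) and s[i] == '(':
            # find the matching closing parenthesis
            depth = 1
            j = i + 1
            while depth:
                if s[j] == '(':
                    depth += 1
                elif s[j] == ')':
                    depth -= 1
                j += 1
            children = parse(s[i + 1:j - 1])
            i = j
        result.append((element, children))
    return result

# parse('AB(DDC)C') -> [('A', None),
#                       ('B', [('D', None), ('D', None), ('C', None)]),
#                       ('C', None)]
```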

Speed of “sum” comprehension in Python

Submitted by 心不动则不痛 on 2020-01-02 07:03:14
Question: I was under the impression that using a sum construction was much faster than running a for loop. However, in the following code, the for loop actually runs faster: import time Score = [[3,4,5,6,7,8] for i in range(40)] a=[0,1,2,3,4,5,4,5,2,1,3,0,5,1,0,3,4,2,2,4,4,5,1,2,5,4,3,2,0,1,1,0,2,0,0,0,1,3,2,1] def ver1(): for i in range(100000): total = 0 for j in range(40): total+=Score[j][a[j]] print (total) def ver2(): for i in range(100000): total = sum(Score[j][a[j]] for j in range(40)) print …
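The timing code in the excerpt is flattened and cut off. A reconstructed benchmark (the editor's reconstruction, using timeit instead of the original time-based loop) comparing the two versions:

```python
import timeit

Score = [[3, 4, 5, 6, 7, 8] for _ in range(40)]
a = [0, 1, 2, 3, 4, 5, 4, 5, 2, 1, 3, 0, 5, 1, 0, 3, 4, 2, 2, 4,
     4, 5, 1, 2, 5, 4, 3, 2, 0, 1, 1, 0, 2, 0, 0, 0, 1, 3, 2, 1]

def ver1():
    # plain for loop accumulating into a local variable
    total = 0
    for j in range(40):
        total += Score[j][a[j]]
    return total

def ver2():
    # sum over a generator expression
    return sum(Score[j][a[j]] for j in range(40))

print(timeit.timeit(ver1, number=100_000))
print(timeit.timeit(ver2, number=100_000))
```

With only 40 items per call, the generator expression's per-item overhead (creating the generator and resuming it for every element) can outweigh sum's C-level loop, which is consistent with the asker's observation that the plain loop wins here.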

regular expression very slow on fail

Submitted by ぐ巨炮叔叔 on 2020-01-02 06:53:29
Question: I have a regular expression that should validate whether a string is composed of space-delimited strings. The regular expression works well (OK, it allows an empty space at the end ... but that's not the problem) but takes too long when the validation fails. The regular expression is the following: /^(([\w\-]+)( )?){0,}$/ When trying to validate the string "'this-is_SAMPLE-scope-123,this-is_SAMPLE-scope-456'" it takes 2 seconds. The tests were performed in Ruby 1.9.2-rc1 and 1.8.7. But this is …
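This is classic catastrophic backtracking: inside (([\w\-]+)( )?){0,} the space is optional, so the long run of word characters before the failing comma can be carved into iterations in exponentially many ways before the engine gives up. Below is a sketch of the problem and a backtracking-free alternative, written with Python's re module rather than Ruby (the editor's example; the ambiguity of the pattern is the same in both engines):

```python
import re

# The pattern from the question.  Do not run it against the failing input
# below unless you are prepared to wait: the engine tries every way of
# splitting the long token before the ',' across iterations before failing.
slow = re.compile(r'^(([\w\-]+)( )?){0,}$')

# Unambiguous alternative: validate token by token instead of with a single
# ambiguous regex, so there is nothing to backtrack over.  (Trailing-space
# handling differs slightly: any number of trailing spaces is tolerated.)
token = re.compile(r'^[\w\-]+$')

def is_space_delimited(s):
    if s == '':
        return True
    return all(token.match(part) for part in s.rstrip(' ').split(' '))

print(is_space_delimited('this-is_SAMPLE-scope-123 this-is_SAMPLE-scope-456'))  # True
print(is_space_delimited('this-is_SAMPLE-scope-123,this-is_SAMPLE-scope-456'))  # False, and fast
```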

Python performance improvement request for winkler

Submitted by 依然范特西╮ on 2020-01-02 06:12:29
Question: I'm a python n00b and I'd like some suggestions on how to improve the algorithm so that this method for computing the Jaro-Winkler distance of two names performs better. def winklerCompareP(str1, str2): """Return approximate string comparator measure (between 0.0 and 1.0) USAGE: score = winkler(str1, str2) ARGUMENTS: str1 The first string str2 The second string DESCRIPTION: As described in 'An Application of the Fellegi-Sunter Model of Record Linkage to the 1990 U.S. Decennial Census' by …
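The asker's winklerCompareP is cut off in the excerpt. For reference, here is a compact textbook-style Jaro-Winkler implementation (the editor's sketch, not the asker's code); in practice, switching to a C-backed library such as jellyfish usually helps more than micro-optimizing pure Python:

```python
def jaro_winkler(s1, s2, p=0.1, max_prefix=4):
    """Jaro-Winkler similarity in [0.0, 1.0] (standard formulation)."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    s1_matched = [False] * len1
    s2_matched = [False] * len2
    matches = 0
    # count characters that match within the sliding window
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not s2_matched[j] and s2[j] == c:
                s1_matched[i] = s2_matched[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # count transpositions among the matched characters
    t, k = 0, 0
    for i in range(len1):
        if s1_matched[i]:
            while not s2_matched[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    jaro = (matches / len1 + matches / len2 + (matches - t) / matches) / 3
    # Winkler bonus for a common prefix of up to max_prefix characters
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return jaro + prefix * p * (1 - jaro)

print(jaro_winkler("MARTHA", "MARHTA"))  # ~0.961
```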

C/C++ Indeterminate Values: Compiler optimization gives different output (example)

Submitted by 心已入冬 on 2020-01-02 05:50:11
Question: It seems that C/C++ compilers (clang, gcc, etc.) produce different output depending on the optimization level. You can also check the online link included in this post: http://cpp.sh/5vrmv (change the optimization from none to -O3 to see the differences). Based on the following piece of code, could someone answer a few questions I have: #include <stdio.h> #include <stdlib.h> int main(void) { int *p = (int *)malloc(sizeof(int)); free(p); int *q = (int *)malloc(sizeof(int)); if (p == q) { *p = 10; …

cvPyrDown vs cvResize for face detection optimization

Submitted by 假如想象 on 2020-01-02 05:45:10
Question: I want to optimize my face detection algorithm by scaling down the image. What is the best way? Should I use cvPyrDown (which I saw in one example, but it has yielded poor results so far), cvResize, or another function? Answer 1: If you only want to scale the image, use cvResize as Adrian Popovici suggested. cvPyrDown will apply a Gaussian blur to smooth the image, then by default it will down-sample the image by a factor of two by rejecting even columns and rows. This smoothing may be degrading your …
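For reference, the same comparison with the modern Python bindings (the question and answer use the legacy C API; cv2.resize and cv2.pyrDown are the current equivalents, and 'face.jpg' below is a hypothetical input file):

```python
import cv2

img = cv2.imread('face.jpg')  # hypothetical input image

# Plain resize: you pick the target scale and interpolation.
# INTER_AREA is the usual choice when shrinking before detection.
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

# pyrDown: Gaussian blur followed by dropping every other row/column,
# i.e. a fixed 2x downscale with extra smoothing built in.
small_pyr = cv2.pyrDown(img)

print(small.shape, small_pyr.shape)
```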

'memcpy'-like function that supports offsets by individual bits?

Submitted by 独自空忆成欢 on 2020-01-02 05:28:13
Question: I was thinking about solving this, but it's looking to be quite a task. If I take this on by myself, I'll likely write it several different ways and pick the best, so I thought I'd ask this question to see if there's a good library that solves this already or if anyone has thoughts/advice. void OffsetMemCpy(u8* pDest, u8* pSrc, u8 srcBitOffset, size size) { // Or something along these lines. srcBitOffset is 0-7, so the pSrc buffer // needs to be up to one byte longer than it would need to be …
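Whatever optimized C version ends up being written, a slow but obviously-correct reference is handy for testing it. Here is one in Python (the editor's sketch; it assumes MSB-first bit numbering within each byte, a convention the question leaves open):

```python
def offset_memcpy(src: bytes, src_bit_offset: int, size: int) -> bytes:
    """Copy `size` bytes out of `src`, starting `src_bit_offset` (0-7) bits in.

    Reference implementation for correctness checks, not speed.  Assumes
    bits are numbered MSB-first within a byte.  When the offset is non-zero,
    `src` must hold at least size + 1 bytes, matching the question's comment.
    """
    if not 0 <= src_bit_offset <= 7:
        raise ValueError("src_bit_offset must be in 0..7")
    if src_bit_offset == 0:
        return bytes(src[:size])
    out = bytearray(size)
    for i in range(size):
        # high bits come from the current byte, low bits from the next one
        hi = (src[i] << src_bit_offset) & 0xFF
        lo = src[i + 1] >> (8 - src_bit_offset)
        out[i] = hi | lo
    return bytes(out)

print(offset_memcpy(bytes([0b10110000, 0b01000000]), 4, 1))  # b'\x04'
```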

Bin-packing (or knapsack?) problem

Submitted by 橙三吉。 on 2020-01-02 05:08:09
Question: I have a collection of 43 to 50 numbers ranging from 0.133 to 0.005 (but mostly on the small side). I would like to find, if possible, all combinations that have a sum between L and R, which are very close together.* The brute-force method takes 2^43 to 2^50 steps, which isn't feasible. What's a good method to use here? Edit: The combinations will be used in a calculation and discarded. (If you're writing code, you can assume they're simply output; I'll modify as needed.) The number of …
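One standard way to beat 2^n here is meet-in-the-middle: split the list into two halves of at most ~25 items, enumerate each half's subset sums (about 2^25 entries per side, which fits in memory but is not free), sort one side, and binary-search the window [L - s, R - s] for every sum s on the other side. A sketch under those assumptions (function names are the editor's; float comparisons are exact, so add a tolerance if the bounds require it):

```python
from bisect import bisect_left, bisect_right

def half_sums(items):
    """All (sum, subset) pairs over the subsets of `items` (2**len(items))."""
    pairs = [(0.0, ())]
    for x in items:
        pairs += [(s + x, subset + (x,)) for s, subset in pairs]
    return pairs

def subsets_with_sum_between(values, lo, hi):
    """Yield every subset of `values` whose sum lies in [lo, hi],
    via meet-in-the-middle over the two halves of the list."""
    mid = len(values) // 2
    left = half_sums(values[:mid])
    right = sorted(half_sums(values[mid:]), key=lambda p: p[0])
    right_sums = [s for s, _ in right]
    for s, left_subset in left:
        i = bisect_left(right_sums, lo - s)
        j = bisect_right(right_sums, hi - s)
        for _, right_subset in right[i:j]:
            yield left_subset + right_subset

# Small example: subsets whose sum falls in [0.24, 0.26]
print(list(subsets_with_sum_between([0.1, 0.05, 0.2, 0.005], 0.24, 0.26)))
# -> [(0.05, 0.2), (0.05, 0.2, 0.005)]
```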