optimization

Fastest (most Pythonic) way to consume an iterator

泪湿孤枕 submitted on 2020-06-24 21:43:52
Question: I am curious what the fastest, and most Pythonic, way to consume an iterator would be. For example, say that I want to create an iterator with the map built-in that accumulates something as a side effect. I don't actually care about the result of the map, just the side effect, so I want to blow through the iteration with as little overhead or boilerplate as possible. Something like: my_set = set(); my_map = map(lambda x, y: my_set.add((x, y)), my_x, my_y). In this example, I just want to …
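One widely used idiom for draining an iterator purely for its side effects is the "consume" recipe from the itertools documentation. A minimal sketch (my_x and my_y below are stand-ins, since the excerpt is truncated):

```python
from collections import deque

def consume(iterator):
    # The itertools-recipes "consume" idiom: a zero-length deque drains the
    # iterator at C speed without keeping any of the results.
    deque(iterator, maxlen=0)

# Stand-in data; the real my_x and my_y are not shown in the excerpt.
my_x = range(1000)
my_y = range(1000)

my_set = set()
consume(map(lambda x, y: my_set.add((x, y)), my_x, my_y))

# For this particular side effect, skipping map entirely is usually both
# faster and more idiomatic:
assert my_set == set(zip(my_x, my_y))
```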

Save and load custom optimizers for continued training in TensorFlow

妖精的绣舞 submitted on 2020-06-24 14:53:07
Question: My question is essentially the same as the one specified here, but without using the Keras backend. Namely, how does one save and restore custom optimizers to their last state in TensorFlow (e.g. L-BFGS-B, Adam) when continuing training? As per the solution here for the Adam optimizer specifically, one approach appears to be to use tf.add_collection and tf.get_collection, but that does not seem to work if I need to restore the optimizer in a new session/shell. I have written a simple test …
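In TensorFlow 2.x, one common approach is tf.train.Checkpoint, which tracks an optimizer's internal slot variables alongside the model weights. A minimal sketch with the Keras Adam optimizer (not the asker's code, and it does not cover L-BFGS-B, which is not a tf.keras optimizer):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# tf.train.Checkpoint tracks the optimizer's slots (m, v, iteration count),
# so restoring it resumes training from the saved optimizer state.
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, "./ckpts", max_to_keep=3)

# ... train for a while, then:
manager.save()

# In a new process, rebuild the model and optimizer the same way, then:
ckpt.restore(manager.latest_checkpoint)
```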

Initial Guess/Warm start in CVXPY: give a hint of the solution

血红的双手。 submitted on 2020-06-22 12:54:53
Question: In this bit of code:
import cvxpy as cvx
# Examples: linear programming
# Create two scalar optimization variables.
x = cvx.Variable()
y = cvx.Variable()
# Create 4 constraints.
constraints = [x >= 0, y >= 0, x + y >= 1, 2*x + y >= 1]
# Form objective.
obj = cvx.Minimize(x + y)
# Form and solve problem.
prob = cvx.Problem(obj, constraints)
prob.solve(warm_start=True)
# Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("optimal var", x.value, y.value)
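A minimal sketch of how warm_start is documented to be used in CVXPY: it reuses the previous solution of the same Problem object when only Parameter data changes, rather than being a general initial-guess interface (whether a manually assigned variable value is used as a starting point depends on the solver):

```python
import cvxpy as cvx

b = cvx.Parameter(nonneg=True, value=1.0)

x = cvx.Variable()
y = cvx.Variable()
constraints = [x >= 0, y >= 0, x + y >= b, 2 * x + y >= 1]
prob = cvx.Problem(cvx.Minimize(x + y), constraints)

prob.solve()                 # first solve: cold start
b.value = 2.0                # change the data only
prob.solve(warm_start=True)  # re-solve, starting from the previous solution
print("status:", prob.status, "optimal value", prob.value)
```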

Unit commitment problem using piecewise-linear approximation becomes MIQP

橙三吉。 submitted on 2020-06-17 12:56:18
Question: I am trying to use MILP (Mixed-Integer Linear Programming) to solve the unit commitment problem (unit commitment: an optimization problem that tries to find the best scheduling of generators). There are two optimization variables: the generator power P (continuous), and which line segment of the cost curve to use, BN (binary), used to linearize the quadratic cost function of the generator. Only one line segment can be open at a time, so there is the constraint Bn1 + Bn2 + Bn3 <= 1 …
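The usual way to keep such a formulation linear (MILP rather than MIQP) is to give each cost-curve segment its own power variable bounded by its binary, instead of multiplying the total power P by Bn. A sketch in PuLP with made-up segment data (the asker's modelling tool and numbers are not given in the excerpt):

```python
import pulp

# Hypothetical data for one generator and three cost-curve segments.
seg = [0, 1, 2]
p_lo = {0: 0.0, 1: 50.0, 2: 100.0}     # segment lower breakpoints (MW)
p_hi = {0: 50.0, 1: 100.0, 2: 150.0}   # segment upper breakpoints (MW)
slope = {0: 10.0, 1: 12.0, 2: 15.0}    # $/MWh on each segment
intercept = {0: 0.0, 1: -100.0, 2: -400.0}
demand = 120.0                          # hypothetical load to serve

m = pulp.LpProblem("uc_segment", pulp.LpMinimize)
B = pulp.LpVariable.dicts("B", seg, cat="Binary")
P = pulp.LpVariable.dicts("P", seg, lowBound=0)

# Linear cost: slope * segment power + intercept * binary (no P*B products).
m += pulp.lpSum(slope[s] * P[s] + intercept[s] * B[s] for s in seg)

# Only one segment may be active at a time: Bn1 + Bn2 + Bn3 <= 1.
m += pulp.lpSum(B[s] for s in seg) <= 1

# Segment power is forced to zero unless its binary is on.
for s in seg:
    m += P[s] >= p_lo[s] * B[s]
    m += P[s] <= p_hi[s] * B[s]

# Total generator output must meet the load.
m += pulp.lpSum(P[s] for s in seg) == demand

m.solve()
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```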

No results from apply.paramset if one parameter combination returns nothing

匆匆过客 submitted on 2020-06-17 09:38:46
Question: I've been encountering an issue when optimizing a strategy using the apply.paramset function in quantstrat. The issue appears to be the same as the one described here: Quantstrat: apply.paramset fails due to combine error for certain parameter distributions, but not others. The optimization works well if all of the parameter combinations return at least one transaction; however, if one of the combinations doesn't return a transaction, then the results for all of the combinations are lost/NULL …

Unable to run CPLEX on Pulp in Python

半世苍凉 submitted on 2020-06-17 09:18:46
Question: I am trying to use PuLP to set up my LP model and solve it with the CPLEX solver. I have CPLEX installed with a license on my laptop, but I get the error below: PulpSolverError: PuLP: cannot execute cplex.exe. Answer 1: Make sure that cplex.exe is in your PATH (see Adding directory to PATH Environment Variable in Windows). Alternatively, you can set the path argument to the location of cplex.exe in the CPLEX_CMD constructor (see the source code). Source: https://stackoverflow.com/questions/51275018/unable-to
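A minimal sketch of the path workaround from the answer; the cplex.exe location below is only a placeholder for wherever CPLEX is installed locally:

```python
import pulp

# A tiny LP, just to exercise the solver.
prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
prob += x           # objective
prob += x >= 1      # constraint

# Point PuLP directly at the CPLEX executable instead of relying on PATH.
solver = pulp.CPLEX_CMD(path=r"C:\path\to\CPLEX\bin\x64_win64\cplex.exe")
prob.solve(solver)
print(pulp.LpStatus[prob.status], x.value())
```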

efficient loop over numpy array

回眸只為那壹抹淺笑 submitted on 2020-06-12 04:56:45
Question: Versions of this question have already been asked, but I have not found a satisfactory answer. Problem: given a large numpy vector, find the indices of the vector elements which are duplicated (a variation of this could be comparison with a tolerance). So the problem is ~O(N^2) and memory-bound (at least from the current algorithm's point of view). I wonder why, whatever I tried, Python is 100x or more slower than equivalent C code. import numpy as np; N = 10000; vect = np.arange(float(N)); vect[N/2] = …
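For the exact-equality case, a vectorized approach based on np.unique avoids both the explicit Python loop and the O(N^2) comparison; a minimal sketch (the tolerance variant would need something different, e.g. sorting and comparing neighbours):

```python
import numpy as np

N = 10000
vect = np.arange(float(N))
vect[N // 2] = 1.0  # introduce one duplicate (N // 2 rather than N/2 for Python 3)

# For each element, look up how often its value occurs; indices whose value
# occurs more than once are the duplicated entries.
_, inverse, counts = np.unique(vect, return_inverse=True, return_counts=True)
dup_indices = np.flatnonzero(counts[inverse] > 1)
print(dup_indices)  # -> [1, N // 2]
```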

Why does JDK use shifting instead of multiply/divide?

為{幸葍}努か submitted on 2020-06-11 20:57:49
Question: I have the following question: if asked whether to use a shift vs. a multiply or divide, for example, the answer would be "let the JVM optimize". Example here: is-shifting-bits-faster-than-multiplying. Now I was looking at the JDK source, for example PriorityQueue, and the code uses only shifting for both multiplication and division (signed and unsigned). Taking for granted that the SO post is the valid answer, I was wondering why in the JDK they prefer to do it by shifting? Is it some subtle detail …

Why can the compiler not optimize floating point addition with 0? [duplicate]

眉间皱痕 submitted on 2020-06-10 02:25:12
Question: This question already has answers here: Why does MSVS not optimize away +0? (2 answers). Closed 8 days ago. I have four identity functions which do essentially nothing. Only the multiplication by 1 could be optimized by clang to a single ret statement.
float id0(float x) { return x + 1 - 1; }
float id1(float x) { return x + 0; }
float id2(float x) { return x * 2 / 2; }
float id3(float x) { return x * 1; }
And the compiler output is (clang 10, at -O3): .LCPI0_0: .long 1065353216 # …
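The usual explanation is that x + 0 is not an identity under IEEE-754 because of signed zero (-0.0 + 0.0 rounds to +0.0), while x * 1 is, which is why only id3 collapses to a plain return. A small demonstration using Python floats, which follow the same IEEE-754 signed-zero rules:

```python
import math

x = -0.0
# -0.0 + 0.0 yields +0.0, so `x + 0` changes the sign bit for x = -0.0
# and a strict compiler may not drop the addition.
print(math.copysign(1.0, x + 0.0))  # 1.0  -> the sign of zero is lost
print(math.copysign(1.0, x * 1.0))  # -1.0 -> multiplication by 1 is exact
```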