optimization

Python GEKKO MINLP optimization of energy system: How to build intermediates that are 2D arrays

Submitted by 落爺英雄遲暮 on 2021-02-20 15:22:16
Question: I am currently implementing a MINLP optimization problem in Python GEKKO to determine the optimal operational strategy of a trigeneration energy system. Since I treat the energy demand during all periods of different representative days as input data, essentially all of my decision variables, intermediates, etc. are 2D arrays. I suspect that the declaration of the 2D intermediates is my problem. Right now I use list comprehensions to declare the 2D intermediates, but it seems like Python cannot use …
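
To make the question concrete, here is a minimal, hedged sketch of how 2D intermediates can be declared in GEKKO with a nested list comprehension; the sizes, the dummy demand data, and the cost expression below are placeholders, not the asker's actual model.

from gekko import GEKKO

m = GEKKO(remote=False)
days, hours = 3, 24                                   # representative days x periods (placeholder sizes)
demand = [[10.0] * hours for _ in range(days)]        # dummy demand data

p = m.Array(m.Var, (days, hours), lb=0, ub=50)        # 2D array of decision variables

# 2D "array" of intermediates built with a nested list comprehension;
# each entry wraps exactly one scalar expression in m.Intermediate()
cost = [[m.Intermediate(0.1 * p[d][t]**2 + 2.0 * p[d][t])
         for t in range(hours)] for d in range(days)]

m.Equations([p[d][t] >= demand[d][t] for d in range(days) for t in range(hours)])
m.Minimize(sum(cost[d][t] for d in range(days) for t in range(hours)))

m.options.SOLVER = 1                                  # APOPT, the MINLP-capable solver
m.solve(disp=False)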

Checking the gradient when doing gradient descent

Submitted by 允我心安 on 2021-02-19 08:22:32
Question: I'm trying to implement a feed-forward backpropagating autoencoder (trained with gradient descent) and wanted to verify that I'm calculating the gradient correctly. This tutorial suggests calculating the derivative with respect to each parameter one at a time: grad_i(theta) = (J(theta_i+epsilon) - J(theta_i-epsilon)) / (2*epsilon). I've written a sample piece of code in Matlab to do just this, but without much luck -- the differences between the gradient calculated from the derivative and the gradient …
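
The central-difference check from the excerpt is easy to test in isolation. Below is a short, hedged Python/NumPy sketch (the asker's code is in Matlab); the toy objective J, theta, and epsilon are placeholders.

import numpy as np

def numerical_gradient(J, theta, epsilon=1e-5):
    # Central-difference approximation of dJ/dtheta, one parameter at a time
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e.flat[i] = epsilon
        grad.flat[i] = (J(theta + e) - J(theta - e)) / (2 * epsilon)
    return grad

# toy check: J(theta) = sum(theta^2), analytic gradient = 2*theta
J = lambda t: np.sum(t ** 2)
theta = np.random.randn(5)
num = numerical_gradient(J, theta)
ana = 2 * theta
# relative difference should be tiny if both gradients agree
print(np.linalg.norm(num - ana) / np.linalg.norm(num + ana))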

Why is (ftruncate+mmap+memcpy) faster than (write)?

Submitted by 自闭症网瘾萝莉.ら on 2021-02-19 08:13:13
Question: I found a different way to write data that is faster than the normal Unix write function. First, ftruncate the file to the length we need, then mmap that block of the file, and finally use memcpy to write the file content. I will give the example code below. As far as I know, mmap maps the file into the process address space, and I assumed this speeds things up by bypassing the page cache. But I don't really understand why it speeds up writing. Did I write a wrong test case, or is this a kind of optimization trick? …
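
For reference, the pattern the question describes looks roughly like the following; this is a hedged Python sketch of the same ftruncate + mmap + copy idea rather than the asker's C code, the file names and payload size are made up, and no claim is made here about which variant is faster.

import mmap
import os

data = b"x" * (16 * 1024 * 1024)          # 16 MB of dummy payload

# variant 1: plain write()
fd = os.open("out_write.bin", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, data)
os.close(fd)

# variant 2: ftruncate to the final size, mmap the file, then copy into the mapping
fd = os.open("out_mmap.bin", os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o644)
os.ftruncate(fd, len(data))                # file must have its final length before mapping
with mmap.mmap(fd, len(data)) as mm:
    mm[:] = data                           # the memcpy step: copy straight into the mapped pages
os.close(fd)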

Quickly find subset of list of lists with greatest total distinct elements

Submitted by ╄→гoц情女王★ on 2021-02-19 06:14:48
Question: Given a list of lists of tuples, I would like to find the subset of lists that maximizes the number of distinct integer values without any integer being repeated. The list looks something like this: x = [ [(1,2,3), (8,9,10), (15,16)], [(2,3), (10,11)], [(9,10,11), (17,18,19), (20,21,22)], [(4,5), (11,12,13), (18,19,20)] ]. The internal tuples are always sequential, e.g. (1,2,3) or (15,16), but they may be of any length. In this case, the expected return would be: maximized_list = [ [(1, 2, 3), …
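
Since the excerpt is cut off, here is a hedged brute-force sketch in Python of one way to get that result: it simply tries every subset of the outer list and keeps the largest one whose integer sets are pairwise disjoint, which is only reasonable for small inputs.

from itertools import chain, combinations

x = [
    [(1, 2, 3), (8, 9, 10), (15, 16)],
    [(2, 3), (10, 11)],
    [(9, 10, 11), (17, 18, 19), (20, 21, 22)],
    [(4, 5), (11, 12, 13), (18, 19, 20)],
]

def values(lst):
    # all integers contained in one list of tuples
    return set(chain.from_iterable(lst))

best, best_count = [], 0
# brute force: try every subset of the outer list (fine for small inputs only)
for r in range(1, len(x) + 1):
    for subset in combinations(x, r):
        sets = [values(lst) for lst in subset]
        total = sum(len(s) for s in sets)
        merged = set().union(*sets)
        if len(merged) == total and total > best_count:   # no integer repeated
            best, best_count = list(subset), total

print(best)   # [[(1, 2, 3), (8, 9, 10), (15, 16)], [(4, 5), (11, 12, 13), (18, 19, 20)]]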

Eliminating instantiation of useless destructor calls?

Submitted by 浪尽此生 on 2021-02-19 04:08:35
Question: Well, my colleague is nitpicking quite a bit about eliminating unnecessary code instantiation for destructor functions. It is still the same situation as mentioned in this question: very limited space for the .text section (less than 256 KB); the code base should scale across several targets, including the most limited ones; the use cases of the code base are well known, i.e. whether destructor logic is necessary to manage object lifetimes or not (in many cases the lifetime of objects is infinite, unless the hardware is …

MATLAB fast (componentwise) vector operations are…really fast

Submitted by 血红的双手。 on 2021-02-19 02:53:29
Question: I have been writing MATLAB scripts for some time and I still do not understand how it works "under the hood". Consider the following script, which does some computation using (big) vectors in three different ways: 1. MATLAB vector operations; 2. a simple for loop that does the same computation component-wise; 3. an optimized loop that is supposed to be faster than 2, since it avoids some allocation and some assignment. Here is the code: N = 10000000; A = linspace(0,100,N); B = linspace(-100,100,N); C = linspace(0 …
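
The MATLAB code above is cut off; purely to illustrate the vectorized-versus-loop comparison the question is making, here is an analogous, hedged sketch in Python/NumPy (not the asker's MATLAB script, and with N reduced so the loop variant stays quick).

import time
import numpy as np

N = 1_000_000                       # smaller than the question's 10,000,000 so the loop finishes quickly
A = np.linspace(0, 100, N)
B = np.linspace(-100, 100, N)

# 1) vectorized, component-wise operations
t0 = time.perf_counter()
C_vec = A * B + np.sin(A)
t_vec = time.perf_counter() - t0

# 2) explicit element-by-element loop doing the same computation
t0 = time.perf_counter()
C_loop = np.empty(N)
for i in range(N):
    C_loop[i] = A[i] * B[i] + np.sin(A[i])
t_loop = time.perf_counter() - t0

print(f"vectorized: {t_vec:.4f} s, loop: {t_loop:.4f} s")
assert np.allclose(C_vec, C_loop)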