optimization

Optimizing an incrementing ASCII decimal counter in video RAM on 7th gen Intel Core

Posted by 无人久伴 on 2020-05-08 09:40:48
Question: I'm trying to optimize the following subroutine for a specific Kaby Lake CPU (i5-7300HQ), ideally to make the code at least 10 times faster than its original form. The code runs as a floppy-style bootloader in 16-bit real mode. It displays a ten-digit decimal counter on screen, counting 0 - 9999999999 and then halting. I have taken a look at Agner's optimization guides for microarchitecture and assembly, the instruction performance tables, and Intel's Optimization Reference Manual. Only …
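The core of such a counter is incrementing a fixed-width string of ASCII digits in place, rippling the carry only as far as needed. A minimal Python sketch of that idea (the actual question targets 16-bit x86 assembly, so this only illustrates the algorithm, not the machine-level tuning):

```python
def inc_ascii(digits: bytearray) -> None:
    """Increment a fixed-width ASCII decimal counter in place.

    Start at the least significant digit, add one, and ripple the
    carry left only while digits wrap, so most increments touch a
    single byte.
    """
    i = len(digits) - 1
    while i >= 0:
        if digits[i] < ord('9'):
            digits[i] += 1      # no carry: done after one byte
            return
        digits[i] = ord('0')    # '9' wraps to '0', carry continues
        i -= 1

counter = bytearray(b"0000000099")
inc_ascii(counter)              # counter is now b"0000000100"
```

The same early-exit structure is what makes the assembly version fast: the common case (no carry) updates one byte of video RAM instead of rewriting all ten digits.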

How to optimise Apps Script code by using arrays to pull data from Google Sheets and format it?

Posted by 时光总嘲笑我的痴心妄想 on 2020-05-07 07:27:31
Question: I have a script that takes data from a Google Sheet and replaces placeholders in a Google Doc. I am looking to optimise the script by using arrays instead. This is a sample of my gsheet (the original gsheet spans 1000+ rows and 15+ columns). Original script:

function generategdoc() {
  SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sheet1").activate();
  var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var lr = ss.getLastRow();
  for (var i = 2; i <= lr; i++) {
    if (ss.getRange(i, 1).getValue()) { …
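The usual fix is to replace per-cell `getValue()` calls with one bulk `getValues()` read and then loop over the resulting 2D array in memory. A Python sketch of that access pattern, using a toy sheet class (hypothetical, just to count simulated service round trips):

```python
class FakeSheet:
    """Toy stand-in for a spreadsheet that counts simulated round trips."""
    def __init__(self, rows):
        self.rows = rows
        self.calls = 0

    def get_value(self, r, c):          # analogue of getRange(r, c).getValue()
        self.calls += 1
        return self.rows[r][c]

    def get_values(self):               # analogue of getDataRange().getValues()
        self.calls += 1
        return [row[:] for row in self.rows]

sheet = FakeSheet([["Alice", 10], ["Bob", 20], ["Carol", 30]])

# Per-cell pattern: 6 round trips for a 3x2 range.
slow = [(sheet.get_value(r, 0), sheet.get_value(r, 1)) for r in range(3)]
slow_calls = sheet.calls

sheet.calls = 0
# Bulk pattern: 1 round trip, then plain in-memory iteration.
data = sheet.get_values()
fast = [(row[0], row[1]) for row in data]
```

For a 1000-row, 15-column sheet the per-cell pattern costs ~15,000 service calls versus one, which is where nearly all of the speedup comes from.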

Calculating NDVI per region, month & year with Google Earth Engine?

Posted by 给你一囗甜甜゛ on 2020-04-30 10:18:25
Question: I want to calculate the mean NDVI per region (admin level 3, also called woreda), month and year. So my end result would look something like this:

    regions    year  month   NDVI
    ---------------------------------
    region_1   2010      1    0.5
    region_1   2010      2   -0.6
    region_1   2010      3    0.7
    region_1   2010      4   -0.3
    region_1   2010      5    0.4
    region_1   2010      6   -0.5
    region_1   2010      7    0.5
    region_1   2010      8   -0.7
    region_1   2010      9    0.8
    region_1   2010     10   -0.55
    region_1   2010     11   -0.3
    region_1   2010     12   -0.2
    region_2   2010      1    0.5
    region_2   2010      2   -0.6 …
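Whatever tool produces the per-observation NDVI values, the aggregation step is a group-by-mean keyed on (region, year, month). A plain-Python sketch with hypothetical sample values (in Earth Engine the samples would come from reducing an NDVI collection over the woreda boundaries):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-observation samples: (region, year, month, ndvi).
samples = [
    ("region_1", 2010, 1, 0.4),
    ("region_1", 2010, 1, 0.6),
    ("region_1", 2010, 2, -0.6),
    ("region_2", 2010, 1, 0.5),
]

# Collect observations under their (region, year, month) key...
groups = defaultdict(list)
for region, year, month, ndvi in samples:
    groups[(region, year, month)].append(ndvi)

# ...then average each group: the shape of the desired result table.
table = {key: mean(vals) for key, vals in groups.items()}
```

Each row of the target table is one entry of `table`, e.g. `("region_1", 2010, 1)` maps to the mean of that region-month's observations.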

How to efficiently wrap the index of a fixed-size circular buffer

Posted by 风流意气都作罢 on 2020-04-29 09:51:16
Question: I have a fixed-size circular buffer (implemented as an array): upon initialization, the buffer gets filled with the specified maximum number of elements, which allows the use of a single position index to keep track of our current position in the circle. What is an efficient way to access an element in the circular buffer? Here is my current solution:

int GetElement(int index) {
  if (index >= buffer_size || index < 0) {
    // some code to handle the case
  } else {
    // wrap the index
    index = …
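The two standard wrapping techniques are a modulo for arbitrary sizes, and a bitmask when the capacity is a power of two (one AND instead of a division-based modulo). A Python sketch of both (the original question is in C, but the arithmetic is identical):

```python
BUF_SIZE = 8            # the bitmask trick requires a power-of-two capacity

def wrap_mod(pos: int, step: int, size: int) -> int:
    """General wrap: works for any buffer size."""
    return (pos + step) % size

def wrap_mask(pos: int, step: int, size: int) -> int:
    """Power-of-two wrap: size - 1 is an all-ones mask, so a single
    AND replaces the modulo."""
    return (pos + step) & (size - 1)
```

Both agree whenever `size` is a power of two, e.g. `wrap_mod(6, 3, 8)` and `wrap_mask(6, 3, 8)` both yield 1; if the capacity can be fixed at a power of two, the mask form is typically the cheapest option.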

Micro Optimization of a 4-bucket histogram of a large array or list

Posted by 旧城冷巷雨未停 on 2020-04-25 11:30:27
Question: I have a special question. I will try to describe this as accurately as possible. I am doing a very important "micro-optimization": a loop that runs for days at a time, so if I can cut this loop's time in half, 10 days would decrease to only 5 days, etc. The loop I have now is the function "testbenchmark1". I have 4 indexes that I need to increase in a loop like this, but accessing an index from a list actually takes some extra time, as I have noticed. This is what I …
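The idea behind the question is to keep the four bucket counts in independent local variables while scanning, instead of doing an indexed store into a list/array on every iteration, and only materialize the list at the end. A Python sketch of both variants (hypothetical input; the original uses a .NET list):

```python
data = [0, 1, 2, 3, 1, 1, 2, 0]   # hypothetical input, values in 0..3

def histogram_list(values):
    """Baseline: indexed store into a list every iteration."""
    counts = [0, 0, 0, 0]
    for v in values:
        counts[v] += 1
    return counts

def histogram_scalars(values):
    """Four independent scalar counters; list built once at the end."""
    c0 = c1 = c2 = c3 = 0
    for v in values:
        if v == 0:   c0 += 1
        elif v == 1: c1 += 1
        elif v == 2: c2 += 1
        else:        c3 += 1
    return [c0, c1, c2, c3]
```

Both produce the same histogram; in a compiled language the scalar version lets the counters live in registers, which is where the speedup the question is chasing comes from.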

How to automatize constraints definition on PuLP?

Posted by 天大地大妈咪最大 on 2020-04-18 06:12:54
Question: Supply Chain Optimization - PuLP & Python. 1. Problem definition. I'm trying to optimize how I define my constraints in the model shown below. Right now, I need to create different constraints for each year, because otherwise the algorithm doesn't provide the expected result. I would like to have a single definition for the "case" constraint, instead of defining one for each year. Where "case" is:

model += (((units_ind[str(1), str(manuf), str('B'), str(option)]) + units_ind[str(1), str(manuf), str('V' …
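The usual answer is to generate the per-year constraint inside a loop rather than writing each year out by hand. Sketched here without PuLP (each constraint is represented as a plain list of hypothetical variable keys, mirroring the `units_ind[year, manuf, type, option]` indexing above):

```python
# Hypothetical index sets; in the real model these come from the data.
years = [1, 2, 3]
manufs = ["manuf_A"]
options = ["opt_1"]

constraints = {}
for year in years:                      # one "case" constraint per year
    for manuf in manufs:
        for option in options:
            # The variable keys that this year's constraint sums over.
            terms = [
                (year, manuf, "B", option),
                (year, manuf, "V", option),
            ]
            constraints[(year, manuf, option)] = terms
```

In PuLP itself the same loop body would end with something like `model += lpSum(units_ind[k] for k in terms) <= rhs`, so the constraint text is written once and stamped out for every year.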

Calculate the memory usage and improve computation performance

Posted by 喜欢而已 on 2020-04-18 03:48:50
Question: I would like to optimize this code in order to save as much memory as possible. My questions are: How many bytes does this currently occupy? How many bytes can be saved by changes, and what are those changes? The /Ox compiler option enables a combination of optimizations that favor speed. In some versions of the Visual Studio IDE and the compiler help message, this is called full optimization, but the /Ox compiler option enables only a subset of the speed optimization options enabled …
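Answering "how many bytes does this occupy" starts from the width of each field. The question's code isn't shown here, so the field layout below is hypothetical; Python's `struct.calcsize` (with an explicit `<` prefix for standard, unpadded sizes) can be used to check C-style type widths and the saving from narrowing fields:

```python
import struct

# Standard widths of common C-style types (the "<" prefix requests
# standard sizes with no platform-dependent padding).
assert struct.calcsize("<i") == 4    # 32-bit int
assert struct.calcsize("<h") == 2    # 16-bit short
assert struct.calcsize("<b") == 1    # 8-bit signed char

# Narrowing fields that never exceed their value range saves memory:
wide = struct.calcsize("<iii")       # three ints: 12 bytes
narrow = struct.calcsize("<bbb")     # three 8-bit fields: 3 bytes
saving = wide - narrow               # 9 bytes per record
```

Note this is a size question, not a speed one: /Ox favors speed, and an apples-to-apples memory answer depends on the actual field types and any struct padding the compiler inserts.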