optimization

Python Optimized Most Cosine Similar Vector

北慕城南 submitted on 2020-05-26 04:53:46

Question: I have about 30,000 vectors, and each vector has about 300 elements. Given another vector (with the same number of elements), how can I efficiently find the most (cosine) similar vector? The following is one implementation using a Python loop:

from time import time
import numpy as np

vectors = np.load("np_array_of_about_30000_vectors.npy")
target = np.load("single_vector.npy")
print(vectors.shape, vectors.dtype)  # (35196, 312) float32
print(target.shape, target.dtype)    # (312,) float32
start_time = time()
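A fully vectorized version is a natural answer here. The sketch below uses random data in place of the question's .npy files (shapes match the question) and computes all 35,196 cosine similarities with a single matrix-vector product:

```python
import numpy as np

# Random stand-ins for the question's .npy files, same shapes and dtype
rng = np.random.default_rng(0)
vectors = rng.standard_normal((35196, 312)).astype(np.float32)
target = rng.standard_normal(312).astype(np.float32)

# cosine(v, t) = (v . t) / (|v| |t|); one matmul covers every row at once
sims = vectors @ target / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(target))
best = int(np.argmax(sims))
```

If many lookups are needed against the same set, normalizing `vectors` once up front and reusing the result avoids recomputing the row norms per query.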

Optimizing Python Code [closed]

孤街浪徒 submitted on 2020-05-21 07:42:40

Question: Closed. This question is off-topic and is not currently accepting answers. Want to improve this question? Update it so it's on-topic for Stack Overflow. Closed 8 years ago. I've been working on one of the coding challenges on InterviewStreet.com and I've run into a bit of an efficiency problem. Can anyone suggest where I might change the code to make it faster and more efficient? Here's the code. Here's the problem statement, if you're interested. Answer 1: If your question is about

How can I get GCC profile-guided optimization to stop writing files after being 'optimized'?

寵の児 submitted on 2020-05-17 06:25:08

Question: I'm doing this in a makefile, and the first run creates the .gcda files in that directory; but as soon as I do the second run, I find that the executable is almost as slow, and (surely related) it is still writing new files to the directory after being compiled. From my understanding this shouldn't occur. Removing -fprofile-arcs (or -lgcov, for that matter) makes the second compile complain about missing symbols. What am I missing? I run make clean between both of these executions, by the way. I also

Cumulative Result of Matrix Multiplications

随声附和 submitted on 2020-05-17 06:15:27

Question: Given a list of n x n matrices, I want to compute the cumulative product of these matrix multiplications, i.e. given matrices M0, M1, ..., Mm I want a result R where R[0] = M0, R[1] = M0 x M1, R[2] = M0 x M1 x M2, and so on. Obviously, you can do this via for-loops or tail recursion, but I'm coding in Python, where that runs at a snail's pace. In code:

def matrix_mul_cum_sum(M):
    # M[0..m-1] is an m x n x n array
    if len(M) == 0:
        return []
    result = [M[0]]
    for A in M[1:]:
        result.append(np.mat_mul
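One compact way to express this cumulative product (a sketch, not from the excerpt) is itertools.accumulate with np.matmul as the combining function, which keeps the running product at the C level of the iterator protocol rather than in hand-written list bookkeeping:

```python
import numpy as np
from itertools import accumulate

def matrix_mul_cum_prod(M):
    # R[k] = M[0] @ M[1] @ ... @ M[k]; empty input yields an empty list
    return list(accumulate(M, np.matmul))

# Small worked example: the shear matrix [[1,1],[0,1]] cubed is [[1,3],[0,1]]
M = [np.array([[1, 1], [0, 1]]) for _ in range(3)]
R = matrix_mul_cum_prod(M)
```

Each step is still one np.matmul call, so the per-multiply cost is the same as NumPy's; what this removes is the explicit Python loop and append logic.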

Optimizing performance for ClosedXML loops and row deletion

最后都变了- submitted on 2020-05-17 06:11:51

Question: I'm reading an Excel file and looping through the rows, deleting those that meet a condition:

using (var wb = new XLWorkbook(path))
{
    var ws = wb.Worksheet(sheet);
    int deleted = 0;
    for (int row_i = 2; row_i <= ws.LastRowUsed().RowNumber(); row_i++)
    {
        ExcelRow row = new ExcelRow(ws.Row(row_i - deleted));
        row.styleCol = header.styleCol;
        K key = keyReader(row);
        if (!writeData(row, dict[key]))
            deleted++;
    }
    wb.Save();
}

The code is very slow for a file with thousands of rows, even without deletions,

What is the solution of the following optimization problem with linear and nonlinear constraints?

眉间皱痕 submitted on 2020-05-17 02:57:32

Question:

min x
subject to:
    zeta_1 >= b
    zeta_2 >= h
    t * log(1 + m_b * zeta_1) >= t_bh
    (1 - t) * log(1 + t * m_h * zeta_2) >= t_hb
    0 <= t <= 1
    ||y|| = 1,

where zeta_1 = (|transpose(a)*y|^2) * x and zeta_2 = (|transpose(c)*y|^2) * x. Here m_b and m_h are parameters; a, c, and y are complex vectors of dimension N x 1; and b, h, t_bh, and t_hb are constants. I have used the following simulation:

tic
clc
clear all
close all
%% initialization
N = 5;
alpha = 0.5;
eta = 0.6;
sigma_B = 10^-8;
sigma_H = 10^-8;
b = 0.00001;
h = 0.00001;
h_AB = [];
h_AH = [];
h_BH
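For illustration only, here is a sketch of how a reduced form of this problem could be handed to scipy.optimize.minimize. It freezes y and t at fixed values (the original problem also optimizes over them) and uses placeholder gains g1 and g2 standing in for |transpose(a)*y|^2 and |transpose(c)*y|^2, along with made-up parameter values, so only the structure of the constraints carries over:

```python
import numpy as np
from scipy.optimize import minimize

# All values below are placeholders, not from the question
m_b, m_h = 2.0, 3.0
t = 0.5                  # frozen for this sketch; a variable in the real problem
t_bh, t_hb = 0.1, 0.1
g1, g2 = 0.8, 0.6        # stand-ins for |a^T y|^2 and |c^T y|^2 with y fixed
b, h = 1e-5, 1e-5

# Decision variable: the scalar x; zeta_1 = g1*x, zeta_2 = g2*x
constraints = [
    {"type": "ineq", "fun": lambda v: g1 * v[0] - b},                            # zeta_1 >= b
    {"type": "ineq", "fun": lambda v: g2 * v[0] - h},                            # zeta_2 >= h
    {"type": "ineq", "fun": lambda v: t * np.log(1 + m_b * g1 * v[0]) - t_bh},
    {"type": "ineq", "fun": lambda v: (1 - t) * np.log(1 + t * m_h * g2 * v[0]) - t_hb},
]

res = minimize(lambda v: v[0], x0=[1.0], bounds=[(0, None)], constraints=constraints)
```

With these placeholder numbers the rate constraints are the binding ones and the minimizer sits near x = 0.246. Optimizing over y and t as well would require adding them to the decision vector (splitting y into real and imaginary parts) and enforcing ||y|| = 1 as an equality constraint.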

Moving Zeros To The End: Failing the test in CodeWars?

北战南征 submitted on 2020-05-16 06:58:50

Question: Link to the problem: https://www.codewars.com/kata/52597aa56021e91c93000cb0/train/python

Write an algorithm that takes an array and moves all of the zeros to the end, preserving the order of the other elements.

move_zeros([false,1,0,1,2,0,1,3,"a"]) # returns [false,1,1,2,1,3,"a",0,0]

My code:

def move_zeros(array):
    list = []
    list2 = []
    for n in array:
        if n is 0:
            list2.append(n)
        else:
            list.append(n)
    return list + list2

Sample tests:

Test.describe("Basic tests")
Test.assert_equals(move
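The posted code fails because `n is 0` tests object identity rather than equality, so values like `0.0` are never moved; switching to plain `n == 0` would overshoot, since `False == 0` in Python and the kata requires `False` to keep its position. One fix along those lines (not from the excerpt) excludes bools explicitly:

```python
def move_zeros(array):
    # bools are excluded from the zero test: False == 0 in Python,
    # but the kata expects False to stay where it is
    non_zeros = [n for n in array if isinstance(n, bool) or n != 0]
    zeros = [n for n in array if not isinstance(n, bool) and n == 0]
    return non_zeros + zeros

print(move_zeros([False, 1, 0, 1, 2, 0, 1, 3, "a"]))
# [False, 1, 1, 2, 1, 3, 'a', 0, 0]
```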

What makes Jsoup faster than HttpURLConnection and HttpClient in most cases?

拜拜、爱过 submitted on 2020-05-16 05:58:25

Question: I want to compare the performance of the three implementations mentioned in the title, so I wrote a little Java program to help me do this. The main method contains three test blocks, and each block looks like this:

nb = 0;
time = 0;
for (int i = 0; i < 7; i++) {
    double v = methodX(url);
    if (v > 0) {
        nb++;
        time += v;
    }
}
if (nb == 0) nb = 1;
System.out.println("HttpClient : " + (time / ((double) nb)) + ". Tries " + nb + "/7");

The variable nb is used to account for failed requests. Now the method methodX is one of: private

Refactoring middleware code of NodeJS project, using routes, controllers and models

百般思念 submitted on 2020-05-15 09:25:47

Question: I'm currently having difficulty structuring my NodeJS project. I followed several YouTube series and saw people use different techniques to structure their code. Which structure would you suggest in my case? What's best practice? My app.js establishes the connection to MongoDB, initializes Express, bodyParser, and Pug as the view engine, and finally starts the server. My router.js contains all routes and, unfortunately, some middleware code, which I want to move into their own