algorithm

Efficient algorithm for removing items from an array in place

Submitted by 我只是一个虾纸丫 on 2021-01-29 22:10:55
Question: I'm looking for an efficient JavaScript utility method that removes a set of items from an array in place in O(n). You can assume equality with the === operator will work correctly. Here is an example signature (written in TypeScript for type clarity): function deleteItemsFromArray<T>(array: T[], itemsToDelete: T[]) { ... } My thought is to do this in two passes. The first pass gets the indexes that need to be removed. The second pass then compacts the array by copying backwards from the
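
A minimal sketch of an O(n) in-place compaction along the lines the question describes, assuming === equality (which is what Set uses for primitives and object identity); the function name simply mirrors the signature above:

```typescript
// Sketch: remove every element of itemsToDelete from array, in place, in O(n).
// Assumes === equality is acceptable (Set uses it for primitives and object identity).
function deleteItemsFromArray<T>(array: T[], itemsToDelete: T[]): void {
  const toDelete = new Set(itemsToDelete);   // O(1) membership tests
  let write = 0;                             // next slot for a surviving element
  for (let read = 0; read < array.length; read++) {
    if (!toDelete.has(array[read])) {
      array[write++] = array[read];          // compact survivors toward the front
    }
  }
  array.length = write;                      // truncate the tail in place
}
```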

Time efficiency in Kruskal's algorithm using adjacency matrix as data structure

Submitted by 帅比萌擦擦* on 2021-01-29 20:26:46
Question: This is the pseudocode I used for Kruskal's algorithm. The data structure I have used here is an adjacency matrix. I got the order of growth as n^2, and I want to know whether that is correct or not. Kruskal's pseudocode:
1. Kruskal (n, m, E)
2. // Purpose: to compute the minimum spanning tree using Kruskal's algorithm
3. // Inputs
4. n - Number of vertices in the graph
5. m - Number of edges in the graph
6. E - Edge list consisting of set of edges along with equivalent weight w - cost adjacency
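
The pseudocode above is cut off before the main loop, but a rough cost breakdown for a standard union-find Kruskal that first has to gather its edges from an n x n adjacency matrix would look like this (a back-of-the-envelope sketch under those assumptions, not a verdict on the asker's exact pseudocode):

```latex
% assumptions: union-find Kruskal, edges first read out of an n x n adjacency matrix
T(n, m) = \underbrace{\Theta(n^2)}_{\text{scan matrix for edges}}
        + \underbrace{O(m \log m)}_{\text{sort the edge list}}
        + \underbrace{O(m\,\alpha(n))}_{\text{union-find operations}}
```

Since m can be as large as n(n-1)/2, the sorting term can reach O(n^2 log n), so an overall Θ(n^2) bound would require the edge list to arrive pre-sorted (or the sort not to be counted).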

Hanoi algorithm first recursion step?

Submitted by 给你一囗甜甜゛ on 2021-01-29 20:13:59
Question: I know this algorithm in theory. Let's assume we have three disks and we want to move the disks from rod A to rod C. First move n-1 disks from A to B using C. Then move the last, biggest disk to C. Finally, move all disks from B to C using A. I don't get why, using the first recursion hanoi(n, A, C, B), we move all disks from A to C. Let's make iterations. First step: n - 1 -> 2, which means we get the second disk from the stack of disks:
Disk 1
Disk 2  -> get this
-------
Disk 3
And move it to the empty rod: Disk 1 Destination
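
A minimal recursive sketch (TypeScript, with assumed names matching the question's hanoi(n, A, C, B) argument order). Tracing hanoi(3, "A", "C", "B") shows that the first recursive call, hanoi(n - 1, A, B, C), moves the top n-1 disks to the spare rod B, not to C; only the largest disk then goes straight to C:

```typescript
// Move n disks from `from` to `to`, using `aux` as the spare rod.
function hanoi(n: number, from: string, to: string, aux: string): void {
  if (n === 0) return;
  hanoi(n - 1, from, aux, to);                          // step 1: top n-1 disks go to the spare rod
  console.log(`move disk ${n} from ${from} to ${to}`);  // step 2: largest disk goes to the target
  hanoi(n - 1, aux, to, from);                          // step 3: the n-1 disks follow it onto the target
}

hanoi(3, "A", "C", "B"); // prints the 7 moves for three disks
```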

Breaking gzipped JSON into chunks of arbitrary size

Submitted by 只谈情不闲聊 on 2021-01-29 18:25:36
Question: This is being done in TypeScript, but the algorithm is applicable to any language, I think. I am sending log data (parsed AWS ALB logs) to New Relic, and their maximum payload size is 10^6 bytes. What I'm doing right now is encoding the entire ALB log I get from S3 as JSON, gzipping it, and then examining the size via Buffer.byteLength. If it's in excess of 900,000 bytes (I want to leave some headroom, because the gzipped data doesn't scale exactly linearly with the number of log entries) I
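
A sketch of one way to do the splitting (the 900,000-byte headroom figure comes from the question; the recursive-halving strategy and all names are assumptions, not New Relic's API): gzip a candidate batch, and if the result is still too large, split the batch in half and retry each half:

```typescript
import { gzipSync } from "zlib";

// Sketch: split an array of log entries into gzipped JSON chunks that each
// stay under maxBytes, by halving any batch whose compressed size is too big.
const MAX_BYTES = 900_000; // headroom under the 10^6-byte payload cap

function chunkEntries<T>(entries: T[], maxBytes: number = MAX_BYTES): Buffer[] {
  const compressed = gzipSync(JSON.stringify(entries));
  if (compressed.byteLength <= maxBytes || entries.length <= 1) {
    return [compressed];                      // small enough, or cannot split further
  }
  const mid = Math.ceil(entries.length / 2);  // halve and handle each side independently
  return [
    ...chunkEntries(entries.slice(0, mid), maxBytes),
    ...chunkEntries(entries.slice(mid), maxBytes),
  ];
}
```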

Group by JavaScript Array object

Submitted by 大城市里の小女人 on 2021-01-29 17:51:07
Question: I have the array object below in JavaScript: [["English", 52], ["Hindi", 154], ["Hindi", 241], ["Spanish", 10], ["French", 65], ["German", 98], ["Russian", 10]] What would be the best way to group the array items by language and compute their average values in JavaScript? I am using the code below to do the grouping: function (Scores) { var map = {}; for (var i = 0; i < Scores.length; i++) { var score = map[Scores[i][0]]; if (score) { score = { 'Sum': score.Sum + Scores[i][1], 'Count': score.Match + 1,
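
A sketch of the grouping in TypeScript (the function and type names are assumptions, not from the question): accumulate a sum and a count per language in one pass, then derive the averages:

```typescript
type Score = [string, number];

// Group [language, score] pairs and compute the average score per language.
function averageByLanguage(scores: Score[]): Record<string, number> {
  const totals: Record<string, { sum: number; count: number }> = {};
  for (const [language, value] of scores) {
    const entry = totals[language] ?? { sum: 0, count: 0 };
    entry.sum += value;
    entry.count += 1;
    totals[language] = entry;
  }
  const averages: Record<string, number> = {};
  for (const [language, { sum, count }] of Object.entries(totals)) {
    averages[language] = sum / count;
  }
  return averages;
}

// averageByLanguage([["English", 52], ["Hindi", 154], ["Hindi", 241]])
// => { English: 52, Hindi: 197.5 }
```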

How to write an algorithm for finding specific values of a numpy array in Python

Submitted by 痴心易碎 on 2021-01-29 17:18:55
Question: I have a series of lines and want to extract some of them. These are the line numbers: line_no = np.arange(17, 34) These lines are arranged in two perpendicular directions. I have shown them with blue and red lines in the figure. I know where the direction changes; it is called sep: sep = 25 # lines from 17 to 25 are blue and from 26 to the end are red Then, I have the number of the points that create the lines. I call them chunks, because each number can be a chunk: chunk_val = np.array([1,2,3,3,4])

How to find 2 to the power of an insanely big number modulo 10^9

Submitted by 六眼飞鱼酱① on 2021-01-29 16:23:08
Question: I have an excessively big number (1500+ digits) and I need to find 2 ** that number modulo 1_000_000_000, so I wrote this Python:
n = 1
return_value = 2
while n < To_the_power_of:
    return_value *= 2
    return_value = return_value % 1_000_000_000
    n += 1
This returns the correct value for smaller values, but takes too long for bigger ones. If the modulus is 10, you get this pattern, which could be used:
2 ** 1 modulo 10 = 2
2 ** 2 modulo 10 = 4
2 ** 3 modulo 10 = 8
2 ** 4 modulo 10 = 6
2
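
The standard way to make this fast is modular (binary) exponentiation, which needs only O(log exponent) multiplications; in Python the built-in three-argument pow(2, exponent, 1_000_000_000) already does exactly this. A sketch of the idea in TypeScript with BigInt (names are assumptions):

```typescript
// Sketch: square-and-multiply (binary exponentiation), O(log exponent) multiplications.
// BigInt keeps the intermediate squares exact; Python's pow(2, e, m) is the equivalent built-in.
function powMod(base: bigint, exponent: bigint, modulus: bigint): bigint {
  let result = 1n;
  base %= modulus;
  while (exponent > 0n) {
    if ((exponent & 1n) === 1n) {
      result = (result * base) % modulus;  // fold in the current bit of the exponent
    }
    base = (base * base) % modulus;        // square for the next bit
    exponent >>= 1n;
  }
  return result;
}

// powMod(2n, 10n, 1_000_000_000n) === 1024n
```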

What is the best algorithm to stop the winning function of Connect Four from returning true again?

Submitted by 我怕爱的太早我们不能终老 on 2021-01-29 16:18:38
Question: My winning function works well, but let's say there are four circles in a line and the row has 10 columns: when I add the 5th circle, the winning function returns true again. Is there a way to mark the circles that have already won, and how? Here's the code:
function horizontalWin(row, col) {
  var playervalue = avatar[turn];
  var count = 0;
  for (var i = 0; i < GRID_SIZE; i++) {
    var slot = document.getElementById('tbl').rows[row].cells[i].innerHTML;
    if (slot == playervalue) {
      count++;
      if (count >= 4) { return
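
One common fix (shown as a TypeScript sketch with assumed names, not the asker's actual game code) is to latch the result: record the winning cells, or set a game-over flag, the first time a win is detected, and have later win checks short-circuit:

```typescript
// Sketch: once a win is found, remember it so later moves cannot "win" again.
let gameOver = false;               // latched the first time a win is detected
const wonCells = new Set<string>(); // "row,col" keys of the winning four, for highlighting

function registerWin(cells: Array<[number, number]>): void {
  gameOver = true;
  for (const [row, col] of cells) {
    wonCells.add(`${row},${col}`);  // mark the cells that made up the win
  }
}

function horizontalWinOnce(row: number, col: number): boolean {
  if (gameOver) return false;       // a winner already exists: stop re-reporting
  // ... existing horizontal scan; on success call registerWin(...) and return true
  return false;
}
```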

Efficient divide-and-conquer algorithm

Submitted by 社会主义新天地 on 2021-01-29 15:57:45
Question: At a political event, introducing two people determines whether or not they represent the same party. Suppose more than half of the n attendees represent the same party. I'm trying to find an efficient algorithm that will identify the representatives of this party using as few introductions as possible. A brute-force solution would be to maintain two pointers over the array of attendees, introducing n attendees to n-1 other attendees in O(n^2) time. I can't figure out how to improve on this. Edit:
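
One well-known way to cut this to O(n) introductions is the Boyer-Moore majority-vote idea, sketched below with an assumed sameParty(a, b) oracle standing in for a single introduction. It is not a divide-and-conquer algorithm, but it solves the same guaranteed-majority problem:

```typescript
// Sketch: find a member of the guaranteed majority party using O(n) "introductions".
// `sameParty(a, b)` stands in for introducing attendee a to attendee b.
function findMajorityMember<T>(attendees: T[], sameParty: (a: T, b: T) => boolean): T {
  let candidate = attendees[0];
  let count = 1;
  for (let i = 1; i < attendees.length; i++) {
    if (count === 0) {
      candidate = attendees[i];        // adopt a fresh candidate
      count = 1;
    } else if (sameParty(candidate, attendees[i])) {
      count++;                         // same party: reinforce the candidate
    } else {
      count--;                         // different party: cancel one vote
    }
  }
  // More than half share one party, so the surviving candidate belongs to it.
  return candidate;
}
```

A second pass introducing everyone to the surviving candidate (another n - 1 introductions) then identifies all the other representatives of that party.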