duplicates

Why is Safari duplicating a GET request but Chrome is not?

喜夏-厌秋 submitted on 2019-12-05 21:26:10
Update TL;DR: This is potentially a bug in Safari and/or WebKit. Longer TL;DR: In Safari, after the Fetch API is used to make a GET request, Safari will automatically (and unintentionally) re-run the request when the page is reloaded, even if the code that makes the request has been removed. Newly discovered minimal reproducible code (courtesy of Kaiido below): Front end: <script>fetch('/url')</script> Original post: I have a JavaScript web application which uses the Fetch API to make a GET request against a Node.js (Express) server. In Safari (where the problem is): The request completes as expected
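The excerpt already shows the one-line front end; the following is only a sketch of a matching Express back end that makes a duplicated request visible in the server log. The /url route, port 3000, and the static-file setup are assumptions for illustration:

// server.js - a bare Express app; each GET to /url is logged with a timestamp,
// so a duplicated request shows up as two log lines per page load.
const express = require('express');
const app = express();

app.get('/url', (req, res) => {
  console.log('GET /url at', new Date().toISOString());
  res.sendStatus(204);
});

app.use(express.static(__dirname)); // serves the page containing <script>fetch('/url')</script>
app.listen(3000, () => console.log('listening on http://localhost:3000'));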

Kendo UI Grid Inserts/Updates create Duplicate Records (again)

痞子三分冷 submitted on 2019-12-05 21:19:38
I have the same problem as Daniel had in this topic, but his solution doesn't work for me: http://www.kendoui.com/forums/ui/grid/kendo-ui-grid-inserts-updates-create-duplicate-records.aspx#-jhxqRrNAUGsTFJaC-Ojwg So, the use case. The user adds 2 new records one after another:
1. Presses the "Add new record" button of the grid
2. Fills in the fields (name="Alex", amount=10, comment="first"). Record one is ready.
3. Presses 'Save'. (Data goes to the controller and then to the database.)
4. The user sees one record in the grid.
5. Presses the "Add new record" button again
6. Fills in the fields (name="Bob", amount=20, comment="second"). Record two is ready.
7. Presses
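Duplicate inserts like this are commonly tied to the grid's dataSource never learning the server-generated ID of the new row, so it keeps treating the row as unsaved and re-posts it with the next sync. A minimal sketch of the relevant configuration, with hypothetical endpoint URLs and an assumed Id field, not taken from the original thread:

var dataSource = new kendo.data.DataSource({
    transport: {
        read:   { url: "/records/read",   dataType: "json" },
        create: { url: "/records/create", dataType: "json" },
        update: { url: "/records/update", dataType: "json" }
    },
    schema: {
        model: {
            id: "Id",               // without this, a saved row keeps its default Id and is re-sent as "new"
            fields: {
                Id:      { type: "number", editable: false },
                Name:    { type: "string" },
                Amount:  { type: "number" },
                Comment: { type: "string" }
            }
        }
    }
});
// The server's create action should respond with the saved record, including
// its new Id, so the dataSource can mark the row as no longer new.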

Convert string to array at different character occurrence

会有一股神秘感。 submitted on 2019-12-05 20:54:27
Consider I have this string: 'aaaabbbaaaaaabbbb'. I want to convert this to an array so that I get the following result: $array = [ 'aaaa', 'bbb', 'aaaaaa', 'bbbb' ]. How do I go about this in PHP? I have written a one-liner using only preg_split() that generates the expected result with no wasted memory (no array bloat): Code (Demo): $string='aaaabbbaaaaaabbbb'; var_export(preg_split('/(.)\1*\K/',$string,NULL,PREG_SPLIT_NO_EMPTY)); Output: array ( 0 => 'aaaa', 1 => 'bbb', 2 => 'aaaaaa', 3 => 'bbbb', ) Pattern: (.) #match any single character \1* #match the same character zero or more times \K #keep
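A side note not stated in the answer: on newer PHP versions (8.1+), passing NULL to the int limit parameter of preg_split() raises a deprecation notice; -1 means "no limit" and produces the same result. A minimal sketch with that adjustment:

<?php
$string = 'aaaabbbaaaaaabbbb';

// Split after every run of identical characters; \K resets the match start,
// so nothing is consumed and only the split position between runs is kept.
$parts = preg_split('/(.)\1*\K/', $string, -1, PREG_SPLIT_NO_EMPTY);

var_export($parts);
// array ( 0 => 'aaaa', 1 => 'bbb', 2 => 'aaaaaa', 3 => 'bbbb', )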

XSLT: find duplicates within each child

早过忘川 submitted on 2019-12-05 20:06:18
I'm new to XSLT/XML. I have an XML file similar to this:

<event>
  <division name="Div1">
    <team name="Team1">
      <player firstname="A" lastname="F" />
      <player firstname="B" lastname="G" />
      <player firstname="C" lastname="H" />
      <player firstname="D" lastname="G" />
    </team>
    <team name="Team2">
      <player firstname="A" lastname="F" />
      <player firstname="B" lastname="G" />
      <player firstname="C" lastname="H" />
      <player firstname="D" lastname="I" />
    </team>
  </division>
</event>

I'm trying to write an XSL transformation (to use with xsltproc) that gives me the names of players with the same lastname within the
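The excerpt stops before any attempted stylesheet, so the following is only a sketch of one common XSLT 1.0 approach that works with xsltproc: a key on team name plus lastname, then printing the players whose key matches more than one entry. The key name and the text output format are assumptions:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <!-- Index every player by "team|lastname" so duplicates are counted per team only -->
  <xsl:key name="by-team-lastname" match="player"
           use="concat(../@name, '|', @lastname)"/>

  <xsl:template match="/">
    <xsl:for-each select="//team/player[count(key('by-team-lastname', concat(../@name, '|', @lastname))) &gt; 1]">
      <xsl:value-of select="concat(../@name, ': ', @firstname, ' ', @lastname, '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>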

SQL Import skip duplicates

柔情痞子 submitted on 2019-12-05 19:43:49
I am trying to do a bulk upload into a SQL Server DB. The source file has duplicates which I want to remove, so I was hoping that the operation would automatically upload the first one, then discard the rest. (I've set a unique key constraint.) The problem is, the moment a duplicate upload is attempted the whole thing fails and gets rolled back. Is there any way I can just tell SQL to keep going? Try bulk inserting the data into a temporary table and then SELECT DISTINCT as @madcolor suggested, or INSERT INTO yourTable SELECT * FROM #tempTable tt WHERE NOT EXISTS (SELECT 1 FROM yourTable yt WHERE yt
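A sketch of the staging-table pattern the answer points at; yourTable comes from the excerpt, while #tempTable, the column names, and the file path are placeholders:

-- 1. Load the raw file into a staging table that has no unique constraint
CREATE TABLE #tempTable (Col1 int, Col2 varchar(100), Col3 varchar(100));

BULK INSERT #tempTable
FROM 'C:\data\source.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- 2. Copy across only rows whose key is not already in the target table
INSERT INTO yourTable (Col1, Col2, Col3)
SELECT DISTINCT tt.Col1, tt.Col2, tt.Col3
FROM #tempTable tt
WHERE NOT EXISTS (SELECT 1 FROM yourTable yt WHERE yt.Col1 = tt.Col1);

DROP TABLE #tempTable;

Another route in SQL Server is to create the unique index WITH (IGNORE_DUP_KEY = ON), which discards rows that violate it instead of failing the whole batch.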

Find duplicate words in a sentence with counts, using a for loop

大兔子大兔子 submitted on 2019-12-05 18:21:02
As I am new to Java, I got a task to find only the duplicate words and their counts. I am stuck in one place and unable to get the appropriate output. I cannot use any collections or built-in tools. I tried the code below. Need some help, please help me out.

public class RepeatedWord {
    public static void main(String[] args) {
        String sen = "hi hello hi good morning hello";
        String word[] = sen.split(" ");
        int count = 0;
        for (int i = 0; i < word.length; i++) {
            for (int j = 0; j < word.length; j++) {
                if (word[i].equals(word[j])) {
                    count++;
                }
                if (count > 1)
                    System.out.println("the word " + word[i] + " occured" + count + " time");
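A sketch of one way to get the intended output under the same constraints (arrays and loops only, no collections). Compared with the posted code, it resets the counter for every outer word, skips words already reported on an earlier pass, and prints once per distinct word; the class and variable names are kept from the excerpt:

public class RepeatedWord {
    public static void main(String[] args) {
        String sen = "hi hello hi good morning hello";
        String[] word = sen.split(" ");

        for (int i = 0; i < word.length; i++) {
            // Skip words already counted on an earlier pass of the outer loop
            boolean seenBefore = false;
            for (int k = 0; k < i; k++) {
                if (word[i].equals(word[k])) {
                    seenBefore = true;
                    break;
                }
            }
            if (seenBefore) continue;

            int count = 0;                      // reset for every distinct word
            for (int j = 0; j < word.length; j++) {
                if (word[i].equals(word[j])) {
                    count++;
                }
            }
            if (count > 1) {
                System.out.println("the word " + word[i] + " occurred " + count + " times");
            }
        }
    }
}

With the sample sentence this prints two lines: "the word hi occurred 2 times" and "the word hello occurred 2 times".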

Local Postgres DB keeps giving error: duplicate key value violates unique constraint

人盡茶涼 submitted on 2019-12-05 18:02:40
I don't understand why Postgres is raising: duplicate key value violates unique constraint. I went to check the table in pgAdmin to see if the table really did have a duplicate and saw: "Running VACUUM recommended. The estimated rowcount on the table deviates significantly from the actual rowcount." Why is this happening? Luckily it doesn't seem to happen in production on Heroku. It's a Rails app. Update: Here is the SQL log: SQL (2.6ms) INSERT INTO "favorites" ("artist_id", "author_id", "created_at", "post_id", "updated_at") VALUES ($1, $2, $3, $4, $5) RETURNING "id" [["artist_id", 17], ["author
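One common cause of this symptom on a local database (often after restoring or seeding data) is the primary-key sequence lagging behind the rows already in the table, so a new INSERT reuses an existing id. That diagnosis is only an assumption here, as is the default sequence name favorites_id_seq; a sketch of how to check and repair it:

-- Compare the sequence's position with the highest id actually in the table
SELECT last_value FROM favorites_id_seq;
SELECT MAX(id) FROM favorites;

-- If the sequence is behind, bump it past the current maximum
SELECT setval(pg_get_serial_sequence('favorites', 'id'),
              (SELECT COALESCE(MAX(id), 1) FROM favorites));

From a Rails console, the PostgreSQL adapter's ActiveRecord::Base.connection.reset_pk_sequence!('favorites') performs the same repair.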

Algorithm to find duplicates

丶灬走出姿态 submitted on 2019-12-05 18:02:30
Are there any well-known algorithms to efficiently find duplicates? For example, suppose I have thousands of photos and the photos are named with unique names. There is a chance that duplicates exist in different sub-folders. Is using std::map or any other hash map a good idea? If you're dealing with files, one idea is to first check the file's length, and then generate a hash only for the files that have the same size. Then just compare the files' hashes. If they're the same, you've got a duplicate file. There's a tradeoff between safety and accuracy: there might happen, who knows, to
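A sketch of that size-then-hash idea in C++, using hash maps as the question suggests. The "photos" directory name is an assumption, and the whole-file std::hash is only a stand-in for a real digest such as MD5 or SHA-1, which is exactly the safety/accuracy trade-off the excerpt starts to describe:

#include <cstddef>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <unordered_map>
#include <vector>

namespace fs = std::filesystem;

// Placeholder content hash: reads the whole file and hashes its bytes.
// Swap in a real digest (MD5, SHA-1, ...) for stronger duplicate detection.
std::size_t hashFile(const fs::path& path) {
    std::ifstream in(path, std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());
    return std::hash<std::string>{}(bytes);
}

int main() {
    // Pass 1: bucket every file under the photo root by its size in bytes.
    std::unordered_map<std::uintmax_t, std::vector<fs::path>> bySize;
    for (const auto& entry : fs::recursive_directory_iterator("photos")) {
        if (entry.is_regular_file())
            bySize[entry.file_size()].push_back(entry.path());
    }

    // Pass 2: only files that share a size can be duplicates, so hash just those.
    for (const auto& group : bySize) {
        const auto& paths = group.second;
        if (paths.size() < 2) continue;
        std::unordered_map<std::size_t, fs::path> byHash;
        for (const auto& p : paths) {
            auto result = byHash.emplace(hashFile(p), p);
            if (!result.second)
                std::cout << p << " duplicates " << result.first->second << '\n';
        }
    }
}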

Efficiently delete arrays that are close to each other given a threshold in Python

时间秒杀一切 submitted on 2019-12-05 17:56:16
I am using Python for this job and, to be very objective here, I want to find a 'pythonic' way to remove from an array of arrays the "duplicates" that are within a threshold of each other. For example, given this array: [[ 5.024, 1.559, 0.281], [ 6.198, 4.827, 1.653], [ 6.199, 4.828, 1.653]], observe that [ 6.198, 4.827, 1.653] and [ 6.199, 4.828, 1.653] are really close to each other; their Euclidean distance is 0.0014, so they are almost "duplicates". I want my final output to be just: [[ 5.024, 1.559, 0.281], [ 6.198, 4.827, 1.653]] The algorithm that I have right now is: to_delete = []; for
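A sketch of one vectorised way to do this with NumPy and SciPy: compute all pairwise distances once, then keep a row only if it is not within the threshold of a row already kept. The threshold value 0.01 is an assumption, not taken from the excerpt:

import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.array([[5.024, 1.559, 0.281],
                   [6.198, 4.827, 1.653],
                   [6.199, 4.828, 1.653]])
threshold = 0.01  # assumed tolerance

# Pairwise Euclidean distances as a square matrix.
dist = squareform(pdist(points))

# Keep a row only if no previously kept row is within the threshold.
keep = []
for i in range(len(points)):
    if all(dist[i, j] > threshold for j in keep):
        keep.append(i)

print(points[keep])
# [[5.024 1.559 0.281]
#  [6.198 4.827 1.653]]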

Find duplicates across several columns, excluding the ID column

*爱你&永不变心* submitted on 2019-12-05 16:24:54
I've found a lot of answers on how to find duplicates either including the PK column or without any focus on it, like this: if you have a table called T1, and the columns are c1, c2 and c3, then this query would show you the duplicate values: SELECT C1, C2, C3, count(*) as DupCount from T1 GROUP BY C1, C2, C3 HAVING COUNT(*) > 1 But a more common requirement would be to get the IDs of all duplicates that have equal c1, c2, c3 values. So I need something like the following, which doesn't work because the ID would have to be aggregated: SELECT ID from T1 GROUP BY C1, C2, C3 HAVING COUNT(*) <> 1 (The ID of all duplicates must be different
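A sketch of two common ways to get the IDs themselves, keeping the T1/ID/C1/C2/C3 names from the excerpt; the second form assumes a database that supports window functions:

-- Join the table back to the list of duplicated (C1, C2, C3) groups
SELECT t.ID, t.C1, t.C2, t.C3
FROM T1 t
JOIN (
    SELECT C1, C2, C3
    FROM T1
    GROUP BY C1, C2, C3
    HAVING COUNT(*) > 1
) d ON d.C1 = t.C1 AND d.C2 = t.C2 AND d.C3 = t.C3;

-- Or count each group with a window function, then filter
SELECT ID, C1, C2, C3
FROM (
    SELECT ID, C1, C2, C3,
           COUNT(*) OVER (PARTITION BY C1, C2, C3) AS DupCount
    FROM T1
) x
WHERE DupCount > 1;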