duplicates

Django Inline for ManyToMany generates duplicate queries

Submitted by 醉酒当歌 on 2019-12-06 08:44:25
Question: I'm experiencing some major performance issues with my Django admin: lots of duplicate queries, scaling with the number of inlines I have. models.py: class Setting(models.Model): name = models.CharField(max_length=50, unique=True) class Meta: ordering = ('name',) def __str__(self): return self.name class DisplayedGroup(models.Model): name = models.CharField(max_length=30, unique=True) position = models.PositiveSmallIntegerField(default=100) class Meta: ordering = ('priority',) def __str__(self):
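The excerpt cuts off before the admin code, but duplicate queries from inlines usually come from every inline row re-running the same related-object and choice queries. Below is a minimal sketch of one common mitigation, using hypothetical Machine/MachineSetting names that are not taken from the question: fetch related rows with the inline queryset and evaluate the choice queryset once.

```python
# admin.py -- a sketch with hypothetical Machine/MachineSetting models;
# this is not the original poster's code.
from django.contrib import admin

from .models import Machine, MachineSetting


class MachineSettingInline(admin.TabularInline):
    model = MachineSetting
    extra = 0

    def get_queryset(self, request):
        # Pull the related rows in with the inline rows instead of issuing
        # one extra query per displayed inline.
        qs = super().get_queryset(request)
        return qs.select_related("setting", "displayed_group")

    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        # Evaluate the choice queryset once and reuse the cached list,
        # so each inline row does not rerun the same SELECT.
        formfield = super().formfield_for_foreignkey(db_field, request, **kwargs)
        if db_field.name in ("setting", "displayed_group"):
            formfield.choices = list(formfield.choices)
        return formfield


@admin.register(Machine)
class MachineAdmin(admin.ModelAdmin):
    inlines = [MachineSettingInline]
```

Whether this removes all duplicates depends on which queries are actually repeated; the Django Debug Toolbar is the usual way to confirm.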

Insert New Row in Table 3 if Combination of Col A and Col B in Table C Doesn't Exist

Submitted by a 夏天 on 2019-12-06 07:58:29
I have code that gets data from Tables 1 and 2 and then inserts new rows into Table 3. My problem is that the code adds records that already exist. How can I prevent duplicates from being inserted when the combination of groupid and userid already exists in Table 3? INSERT INTO mdl_groups_members (groupid,userid) SELECT l.mgroup AS moodle, r.id AS mdl_user FROM moodle AS l JOIN mdl_user AS r ON l.orders_id = r.id WHERE l.mgroup > 0 Here's the table before I ran the script: id groupid userid timeadded 1 1 1 1372631339 2 4 2 1372689032 3 8 3 1373514395 4 3 4 1373514395 Here's the table
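One common way to skip combinations that already exist is a NOT EXISTS guard on the SELECT; a sketch against the tables named in the question (INSERT IGNORE would also work if mdl_groups_members has a unique key on (groupid, userid)):

```sql
INSERT INTO mdl_groups_members (groupid, userid)
SELECT l.mgroup, r.id
FROM moodle AS l
JOIN mdl_user AS r ON l.orders_id = r.id
WHERE l.mgroup > 0
  AND NOT EXISTS (
        SELECT 1
        FROM mdl_groups_members m
        WHERE m.groupid = l.mgroup
          AND m.userid  = r.id
      );
```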

ProGuard problems with APK creation

Submitted by 此生再无相见时 on 2019-12-06 07:32:15
OK, this has been driving me nuts for a day. I am mainly an iOS guy, so I don't know much about ProGuard and the like. I have made an Android app that includes both the Dropbox and Google Drive APIs. The app works great if I deploy it to a phone through Eclipse, but I get a nasty error on the console when I try to export the app to generate the APK. My project.properties file was like so... # To enable ProGuard to shrink and obfuscate your code, uncomment this (available properties: sdk.dir, user.home): proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt:proguard
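The excerpt is cut off before the actual console error, but export failures with bundled third-party SDKs are often resolved with keep/dontwarn rules in proguard-project.txt. A purely illustrative sketch; the package patterns are assumptions, not taken from the question, and should be narrowed to the SDKs actually bundled:

```text
# proguard-project.txt -- illustrative only; adjust the package patterns
-dontwarn com.dropbox.**
-keep class com.dropbox.** { *; }

-dontwarn com.google.**
-keep class com.google.** { *; }

# Keep annotations and generic signatures that reflection-based clients rely on
-keepattributes *Annotation*,Signature
```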

Remove All Duplicates In A Large Text File

Submitted by 不打扰是莪最后的温柔 on 2019-12-06 06:11:40
Question: I am really stumped by this problem, and as a result I have stopped working on it for a while. I work with really large amounts of data: I get approximately 200 GB of .txt data every week, and it can run to 500 million lines. A lot of these lines are duplicates; I would guess only about 20 GB is unique. I have had several custom programs made, including hash-based and external de-duplication, but none seem to work. The latest one used a temporary database but took several days to remove the data. The
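Not from the question, but one standard approach when the unique portion will not fit in memory in one pass is hash partitioning: write each line to one of N bucket files based on its hash, then de-duplicate each bucket independently, since identical lines always land in the same bucket. A rough Python sketch under those assumptions:

```python
import hashlib
import os

NUM_BUCKETS = 256  # tune so each bucket's unique lines fit in RAM


def dedupe(src_path, out_path, tmp_dir="buckets"):
    os.makedirs(tmp_dir, exist_ok=True)
    buckets = [open(os.path.join(tmp_dir, f"b{i:03d}.txt"), "w", encoding="utf-8")
               for i in range(NUM_BUCKETS)]
    # Pass 1: identical lines always hash to the same bucket file.
    with open(src_path, encoding="utf-8") as src:
        for line in src:
            h = int(hashlib.md5(line.encode("utf-8")).hexdigest(), 16)
            buckets[h % NUM_BUCKETS].write(line)
    for b in buckets:
        b.close()
    # Pass 2: each bucket is now small enough to de-duplicate with a set.
    with open(out_path, "w", encoding="utf-8") as out:
        for i in range(NUM_BUCKETS):
            seen = set()
            with open(os.path.join(tmp_dir, f"b{i:03d}.txt"), encoding="utf-8") as b:
                for line in b:
                    if line not in seen:
                        seen.add(line)
                        out.write(line)
```

On a Unix box, an external `sort -u` over the file achieves the same result, at the cost of losing the original line order.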

JavaScript - Quickly remove duplicates in object array

Submitted by 戏子无情 on 2019-12-06 05:55:45
I have 2 arrays with objects in them, such as: [{"Start": 1, "End": 2}, {"Start": 4, "End": 9}, {"Start": 12, "End": 16}, ... ] I want to merge the 2 arrays while removing duplicates. Currently, I am doing the following: array1.concat(array2); Then I am doing a nested $.each loop, but as my arrays get larger and larger, this takes O(n^2) time to execute and is not scalable. I presume there is a quicker way to do this; however, all of the examples I have found work with strings or integers. Any recommended algorithms or methods out there to make this faster? This answer is based on the
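A sketch of the usual linear-time approach for the object shape shown in the question: derive a composite key per object and track seen keys in a Set, so the merge stays O(n):

```javascript
function mergeUnique(array1, array2) {
  const seen = new Set();
  const merged = [];
  for (const item of array1.concat(array2)) {
    // "Duplicate" here means same Start and End; extend the key if more fields matter.
    const key = item.Start + ':' + item.End;
    if (!seen.has(key)) {
      seen.add(key);
      merged.push(item);
    }
  }
  return merged;
}

// Example:
// mergeUnique([{Start: 1, End: 2}], [{Start: 1, End: 2}, {Start: 4, End: 9}]);
// -> [{Start: 1, End: 2}, {Start: 4, End: 9}]
```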

R: Make unique the duplicated levels in all factor columns in a data frame

Submitted by 妖精的绣舞 on 2019-12-06 05:44:37
For several days now I've been stuck on a problem in R: trying to make duplicated levels in multiple factor columns of a data frame unique using a loop. This is part of a larger project. I have more than 200 SPSS data sets where the number of cases varies between 4,000 and 23,000 and the number of variables varies between 120 and 1,200 (an excerpt of one of the SPSS data sets can be found here). The files contain both numeric and factor variables, and many of the factor ones have duplicated levels. I have used read.spss from the foreign package to import them into data frames, keeping the value
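Not the poster's code, but a minimal sketch of one way to make duplicated levels unique across every factor column of a data frame, relabelling them positionally with make.unique():

```r
# For each factor column, rename duplicated levels ("x", "x" -> "x", "x.1").
fix_levels <- function(df) {
  for (col in names(df)) {
    if (is.factor(df[[col]])) {
      levels(df[[col]]) <- make.unique(levels(df[[col]]))
    }
  }
  df
}

# df <- fix_levels(df)
```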

Copy/duplicate SQL row with blob/text: how to do that?

Submitted by 这一生的挚爱 on 2019-12-06 05:33:24
I would like to copy a SQL row into the same table, but my table has a 'text' column. With this SQL: CREATE TEMPORARY TABLE produit2 ENGINE=MEMORY SELECT * FROM produit WHERE pdt_ID = 'IPSUMS'; UPDATE produit2 SET pdt_ID='ID_TEMP'; INSERT INTO produit SELECT * FROM produit2; DROP TABLE produit2; I get this error: #1163 - The used table type doesn't support BLOB/TEXT columns Here is my table: pdt_ID varchar(6) pdt_nom varchar(130) pdt_stitre varchar(255) pdt_accroche varchar(255) pdt_desc text pdt_img varchar(25) pdt_pdf varchar(10) pdt_garantie varchar(80) edit_ID varchar(7) scat_ID
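The MEMORY engine cannot hold TEXT/BLOB columns, so one common fix is to let the temporary table use the default disk-based engine. A sketch of the same three steps with ENGINE=MEMORY dropped:

```sql
CREATE TEMPORARY TABLE produit2 SELECT * FROM produit WHERE pdt_ID = 'IPSUMS';
UPDATE produit2 SET pdt_ID = 'ID_TEMP';
INSERT INTO produit SELECT * FROM produit2;
DROP TEMPORARY TABLE produit2;
```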

Remove Duplicate Lines from Text using Java

Submitted by 随声附和 on 2019-12-06 05:04:07
Question: I was wondering if anyone has logic in Java that removes duplicate lines while maintaining the lines' order. I would prefer a non-regex solution. Answer 1: public class UniqueLineReader extends BufferedReader { Set<String> lines = new HashSet<String>(); public UniqueLineReader(Reader arg0) { super(arg0); } @Override public String readLine() throws IOException { String uniqueLine; if (lines.add(uniqueLine = super.readLine())) return uniqueLine; return ""; } //for testing.. public static void main
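The quoted answer is cut off mid-class, so as a standalone illustration (not the answer's code): a LinkedHashSet keeps the order in which lines were first seen while dropping repeats.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.LinkedHashSet;
import java.util.Set;

public class DedupLines {
    public static void main(String[] args) throws IOException {
        // LinkedHashSet preserves insertion order and rejects duplicates.
        Set<String> unique = new LinkedHashSet<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                unique.add(line);
            }
        }
        try (PrintWriter out = new PrintWriter(args[1])) {
            for (String line : unique) {
                out.println(line);
            }
        }
    }
}
```

Usage: `java DedupLines input.txt output.txt`.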

Creating Multidimensional Nested Array from MySQL Result with Duplicate Values (PHP)

Submitted by 本秂侑毒 on 2019-12-06 04:08:10
Question: I am currently pulling menu data out of our database using PDO's fetchAll() function. Doing so puts each row of the query results into an array with the following structure: Array ( [0] => Array ( [MenuId] => mmnlinlm08l6r7e8ju53n1f58 [MenuName] => Main Menu [SectionId] => eq44ip4y7qqexzqd7kjsdwh5p [SubmenuName] => Salads & Appetizers [ItemName] => Tomato Salad [Description] => Cucumbers, peppers, scallions and cured tuna [Price] => $7.00) [1] => Array ( [MenuId] => mmnlinlm08l6r7e8ju53n1f58
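The dump is truncated, but the usual way to fold such flat rows into a nested menu structure is to index by the repeated columns while looping. A sketch using the column names shown in the excerpt; `$rows` is assumed to be the fetchAll() result:

```php
<?php
// $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);  // assumed input

$menus = [];
foreach ($rows as $row) {
    $menuId  = $row['MenuId'];
    $section = $row['SubmenuName'];

    // Create the menu and section buckets the first time they appear,
    // so rows with duplicate MenuId/SubmenuName values reuse them.
    if (!isset($menus[$menuId])) {
        $menus[$menuId] = ['MenuName' => $row['MenuName'], 'Sections' => []];
    }
    if (!isset($menus[$menuId]['Sections'][$section])) {
        $menus[$menuId]['Sections'][$section] = [];
    }

    $menus[$menuId]['Sections'][$section][] = [
        'ItemName'    => $row['ItemName'],
        'Description' => $row['Description'],
        'Price'       => $row['Price'],
    ];
}
```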

Kafka & Flink duplicate messages on restart

Submitted by 淺唱寂寞╮ on 2019-12-06 03:44:21
Question: First of all, this is very similar to Kafka consuming the latest message again when I rerun the Flink consumer, but it's not the same. The answer to that question does NOT appear to solve my problem. If I missed something in that answer, then please rephrase it, as I clearly missed something. The problem is exactly the same, though -- Flink (the Kafka connector) re-runs the last 3-9 messages it saw before it was shut down. My versions: Flink 1.1.2, Kafka 0.9.0.1, Scala 2.11.7, Java 1.8.0_91
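The excerpt stops at the version list. For these versions, the 0.9 connector commits offsets back to Kafka either when a checkpoint completes or, without checkpointing, only periodically via the Kafka client's auto-commit, so a plain restart typically replays the uncommitted tail of messages. Enabling checkpointing (and restoring from a checkpoint or savepoint) is the usual remedy; a hedged Java sketch, with the topic name and addresses as placeholders:

```java
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Without checkpointing, offsets are only auto-committed periodically,
        // so the last few seconds of messages are re-read on every restart.
        env.enableCheckpointing(5000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-flink-group");          // placeholder

        env.addSource(new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("kafka-example");
    }
}
```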