duplicates

Removing duplicate slashes in URL via .htaccess or PHP

允我心安 submitted on 2019-12-02 17:49:06
Question: I'm trying to remove duplicate slashes from URLs. The following .htaccess rule:

    RewriteRule ^(.+)//+(.*)$ $1/$2 [L,NC,R=301]

does NOT work for me on a URL such as the following: http://www.mp7.org/?site=69.com\\\\\\\\\\\\\\\\

The .htaccess file:

    #### mod_rewrite in use
    Options +FollowSymlinks
    RewriteEngine On

Answer 1: This rule won't work for backslashes. You must add a similar rule with backslashes:

    RewriteRule ^(.+)\\\\+(.*)$ $1\\$2 [L,R]

If you want to replace backslashes with (forward) slashes, …
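For comparison, the same normalization can be done in application code before issuing a redirect. Below is a purely illustrative Python sketch (the function name and exact behavior are my assumptions, not part of the thread); it collapses runs of both slash types while leaving the "scheme://" separator intact:

    import re

    def collapse_slashes(url):
        # Treat any run of backslashes as a single forward slash first.
        url = re.sub(r"\\+", "/", url)
        # Then collapse runs of forward slashes, preserving "scheme://".
        scheme, sep, rest = url.partition("://")
        if not sep:
            return re.sub(r"/{2,}", "/", url)
        return scheme + sep + re.sub(r"/{2,}", "/", rest)

    print(collapse_slashes("http://www.mp7.org/?site=69.com\\\\"))
    # -> http://www.mp7.org/?site=69.com/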

Common elements between two lists with no duplicates

扶醉桌前 submitted on 2019-12-02 17:28:31
Question: The problem is this: take two lists, say for example these two:

    a = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    b = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]

and write a program that returns a list containing only the elements that are common between the lists (without duplicates). Make sure your program works on two lists of different sizes. Here's my code:

    a = [1, 1, 2, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    b = [1, 2, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
    c = []
    for i in a:
        if i in b and i not in c:
            …
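The loop-and-membership approach works, but the idiomatic Python solution is set intersection, since set() discards duplicates within each list for free. A short sketch (not the asker's code, same sample data):

    a = [1, 1, 2, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    b = [1, 2, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]

    # set() removes duplicates within each list; & keeps shared elements.
    common = sorted(set(a) & set(b))
    print(common)  # [1, 2, 3, 5, 8, 13]

This also trivially handles lists of different sizes.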

Getting duplicates with additional information

大兔子大兔子 submitted on 2019-12-02 17:26:10
Question: I've inherited a database and I'm having trouble constructing a working SQL query. Suppose this is the data:

    [Products]
    | Id | DisplayId | Version | Company | Description |
    |----|-----------|---------|---------|-------------|
    | 1  | 12345     | 0       | 16      | Random      |
    | 2  | 12345     | 0       | 2       | Random 2    |
    | 3  | AB123     | 0       | 1       | Random 3    |
    | 4  | 12345     | 1       | 16      | Random 4    |
    | 5  | 12345     | 1       | 2       | Random 5    |
    | 6  | AB123     | 0       | 5       | Random 6    |
    | 7  | 12345     | 2       | 16      | Random 7    |
    | 8  | XX45      | 0       | 5       | Random 8    |
    …
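The excerpt cuts off before the goal is stated, but the title suggests listing duplicated rows together with their full ("additional") column data. One standard pattern is to join each row back to an aggregate of its duplicate group; a hypothetical sketch using Python's sqlite3 as a stand-in for the real database (grouping on DisplayId plus Version is my assumption):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Products"
                 " (Id INT, DisplayId TEXT, Version INT, Company INT, Description TEXT)")
    conn.executemany("INSERT INTO Products VALUES (?, ?, ?, ?, ?)", [
        (1, "12345", 0, 16, "Random"),   (2, "12345", 0, 2, "Random 2"),
        (3, "AB123", 0, 1,  "Random 3"), (4, "12345", 1, 16, "Random 4"),
        (5, "12345", 1, 2,  "Random 5"), (6, "AB123", 0, 5, "Random 6"),
        (7, "12345", 2, 16, "Random 7"), (8, "XX45",  0, 5, "Random 8"),
    ])

    # Keep only rows whose (DisplayId, Version) group occurs more than once,
    # returning every column of each duplicated row.
    query = """
        SELECT p.*
        FROM Products p
        JOIN (SELECT DisplayId, Version
              FROM Products
              GROUP BY DisplayId, Version
              HAVING COUNT(*) > 1) d
          ON p.DisplayId = d.DisplayId AND p.Version = d.Version
        ORDER BY p.Id
    """
    for row in conn.execute(query):
        print(row)   # rows 1-6: the 12345/0, AB123/0 and 12345/1 groups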

counting duplicates in a sorted sequence using command line tools

天涯浪子 submitted on 2019-12-02 16:04:10
I have a command (cmd1) that greps through a log file to filter out a set of numbers. The numbers are in random order, so I use sort -gr to get a reverse-sorted list. There may be duplicates within this sorted list, and I need to find the count for each unique number in it. For example, if the output of cmd1 is:

    100
    100
    100
    99
    99
    26
    25
    24
    24

I need another command that I can pipe the above output to, so that I get:

    100 3
    99 2
    26 1
    25 1
    24 2

Stephen Paul Lesniewski: How about:

    $ echo "100 100 100 99 99 26 25 24 24" \
        | tr " " "\n" \
        | sort \
        | uniq -c \
        | sort -k2nr \
        | awk '{printf("…
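The heavy lifting in that pipeline is sort | uniq -c. If Python is available, the same count can be computed with collections.Counter; a hypothetical filter script (count_dups.py is a placeholder name) reading the numbers from stdin, one per line:

    import sys
    from collections import Counter

    # Count each unique number, then print "<number> <count>"
    # in descending numeric order, matching the desired output.
    counts = Counter(int(line) for line in sys.stdin if line.strip())
    for number, count in sorted(counts.items(), reverse=True):
        print(number, count)

Usage: cmd1 | python count_dups.py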

Removing duplicate rows in vi?

天大地大妈咪最大 submitted on 2019-12-02 14:20:21
I have a text file that contains a long list of entries (one on each line). Some of these are duplicates, and I would like to know if it is possible (and if so, how) to remove any duplicates. I am interested in doing this from within vi/vim, if possible.

If you're OK with sorting your file, you can use:

    :sort u

Sean: Try this:

    :%s/^\(.*\)\(\n\1\)\+$/\1/

It searches for any line immediately followed by one or more copies of itself, and replaces them with a single copy. Make a copy of your file before you try it, though; it's untested.

Kevin: From the command line just do:

    sort file | uniq > file.new

awk …
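Both :sort u and sort | uniq reorder the file. If the original line order matters, a small script is the usual fallback; a hypothetical Python sketch (filenames are placeholders) that keeps the first occurrence of each line:

    seen = set()
    with open("file.txt") as src, open("file.new", "w") as dst:
        for line in src:
            if line not in seen:   # keep only the first copy of each line
                seen.add(line)
                dst.write(line)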

Select multiple field duplicates from MySQL Database

老子叫甜甜 submitted on 2019-12-02 14:06:36
Question: I've got an old forum which contains threads with duplicate first posts (perhaps with differing replies). I want to delete all but one of these threads (leaving the thread with the highest view count). I have the following SQL query to help identify duplicate threads, but I can't find a way for it to list only the duplicates with the lowest value for the xf_thread.view_count column:

    SELECT t.thread_id, MIN(t.view_count)
    FROM xf_thread t
    INNER JOIN xf_post p ON p.thread_id = t.thread_id
    WHERE t.first…
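Independent of the exact SQL, the selection logic is: group threads by their first post, keep the thread with the highest view count in each group, and flag the rest for deletion. A Python illustration with invented data (the tuples and values are not from the question):

    # (thread_id, first_post_text, view_count); sample data is invented.
    threads = [
        (1, "Welcome to the forum", 120),
        (2, "Welcome to the forum", 45),
        (3, "Server downtime notice", 300),
        (4, "Welcome to the forum", 80),
    ]

    best = {}   # first_post_text -> thread with the max view_count so far
    for t in threads:
        key, views = t[1], t[2]
        if key not in best or views > best[key][2]:
            best[key] = t

    keep = {t[0] for t in best.values()}
    delete = sorted(t[0] for t in threads if t[0] not in keep)
    print("keep:", sorted(keep))   # keep: [1, 3]
    print("delete:", delete)       # delete: [2, 4]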

How can duplicate results in a different order be removed in a Cypher response?

六眼飞鱼酱① submitted on 2019-12-02 13:51:58
Question: I am trying to find all the videos which 2 people commonly liked, using the following Cypher query:

    MATCH (p1: person)-[:LIKED]->(v)<-[:LIKED]-(p2: person)
    RETURN p1, p2, v

In the output each entry is listed twice, with the values of p1 and p2 switched. Example:

    Bob  | Mary | Cat video
    Mary | Bob  | Cat video

How can such duplicate entries be combined into one?

Answer 1: Here is one way to prevent duplicate results:

    MATCH (p1: person)-[:LIKED]->(v)<-[:LIKED]-(p2: person)
    WHERE ID(p1) < ID(p2)
    RETURN …
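The WHERE ID(p1) < ID(p2) clause works by forcing each pair into one canonical order, so every symmetric match is emitted exactly once. The same idea in plain Python, with invented data:

    # Symmetric matches arrive twice: (a, b, v) and (b, a, v).
    matches = [("Bob", "Mary", "Cat video"), ("Mary", "Bob", "Cat video")]

    # Keep a match only when the pair is already in sorted order,
    # mirroring Cypher's WHERE ID(p1) < ID(p2).
    unique = [(p1, p2, v) for p1, p2, v in matches if p1 < p2]
    print(unique)  # [('Bob', 'Mary', 'Cat video')]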

How to find duplicate records in PostgreSQL

旧巷老猫 submitted on 2019-12-02 13:50:01
I have a PostgreSQL database table called "user_links" which currently allows the following duplicate fields: year, user_id, sid, cid. The unique constraint is currently the first field, called "id"; however, I am now looking to add a constraint to make sure the year, user_id, sid and cid are all unique, but I cannot apply the constraint because duplicate values already exist which violate it. Is there a way to find all duplicates?

Marcin Zablocki: The basic idea will be using a nested query with count aggregation:

    select * from yourTable ou
    where (select count(*) from yourTable inr…
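The answer is truncated here, but the named technique (a correlated subquery that counts rows sharing all four columns) is easy to demonstrate. A sketch against an in-memory SQLite stand-in for user_links (schema simplified, data invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE user_links"
                 " (id INTEGER PRIMARY KEY, year INT, user_id INT, sid INT, cid INT)")
    conn.executemany(
        "INSERT INTO user_links (year, user_id, sid, cid) VALUES (?, ?, ?, ?)",
        [(2018, 1, 7, 3), (2018, 1, 7, 3), (2019, 2, 5, 1)],  # first two collide
    )

    # For each row, count the rows sharing all four columns;
    # a count above 1 means the row is part of a duplicate group.
    dups = conn.execute("""
        SELECT * FROM user_links ou
        WHERE (SELECT COUNT(*) FROM user_links inr
               WHERE inr.year = ou.year AND inr.user_id = ou.user_id
                 AND inr.sid = ou.sid AND inr.cid = ou.cid) > 1
    """).fetchall()
    print(dups)  # [(1, 2018, 1, 7, 3), (2, 2018, 1, 7, 3)]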

Keep only non-duplicate rows based on a Column Value [duplicate]

流过昼夜 submitted on 2019-12-02 13:43:28
This question already has an answer here: How can I remove all duplicates so that NONE are left in a data frame? (2 answers)

This is a follow-up to a previous question. The dataset looks like the following:

    dat <- read.table(header=TRUE, text="
    ID   Veh oct nov dec jan feb
    1120 1   7   47  152 259 140
    2000 1   5   88  236 251 145
    2000 2   14  72  263 331 147
    1133 1   6   71  207 290 242
    2000 3   7   47  152 259 140
    2002 1   5   88  236 251 145
    2006 1   14  72  263 331 147
    2002 2   6   71  207 290 242
    ")
    dat
        ID Veh oct nov dec jan feb
    1 1120   1   7  47 152 259 140
    2 2000   1   5  88 236 251 145
    3 2000   2  14  72 263 331 147
    4 1133   1   6  71 207 290…
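The question is in R, but the "keep only rows that are not duplicated at all" semantics translates directly to pandas. A hypothetical sketch, assuming the duplicate test is on the month columns rather than on ID/Veh:

    import pandas as pd

    dat = pd.DataFrame({
        "ID":  [1120, 2000, 2000, 1133, 2000, 2002, 2006, 2002],
        "Veh": [1, 1, 2, 1, 3, 1, 1, 2],
        "oct": [7, 5, 14, 6, 7, 5, 14, 6],
        "nov": [47, 88, 72, 71, 47, 88, 72, 71],
        "dec": [152, 236, 263, 207, 152, 236, 263, 207],
        "jan": [259, 251, 331, 290, 259, 251, 331, 290],
        "feb": [140, 145, 147, 242, 140, 145, 147, 242],
    })

    months = ["oct", "nov", "dec", "jan", "feb"]
    # keep=False marks *every* member of a duplicate group as True,
    # so ~mask keeps only rows whose month pattern occurs exactly once.
    dup_mask = dat.duplicated(subset=months, keep=False)
    print(dat[~dup_mask])  # empty for this sample: every pattern occurs twice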

Removing duplicates

爱⌒轻易说出口 submitted on 2019-12-02 13:22:46
Question: I would like to remove duplicates from the data in my CSV file. The first column is the year, and the second is the sentence. I would like to remove any duplicates of a sentence, regardless of the year information. Is there a command that I can insert in val text = { } to remove these dupes? My script is:

    val source = CSVFile("science.csv");
    val text = {
      source ~> Column(2) ~>
        TokenizeWith(tokenizer) ~>
        TermCounter() ~>
        TermMinimumDocumentCountFilter(30) ~>
        TermDynamicStopListFilter(10) ~> …
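One workable alternative, if the pipeline itself offers no de-duplication step, is to clean the CSV before it is read. A hypothetical Python sketch that keeps the first row for each sentence (column 2) and drops later repeats, ignoring the year (file names are placeholders):

    import csv

    seen = set()
    with open("science.csv", newline="") as src, \
         open("science_dedup.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            sentence = row[1]          # column 2 holds the sentence
            if sentence not in seen:   # skip rows repeating a sentence
                seen.add(sentence)
                writer.writerow(row)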