How to save both matching and non-matching from grep

Submitted by 天大地大妈咪最大 on 2019-12-01 06:59:36

I use grep very often and am familiar with its ability to return matching lines (by default) and non-matching lines (with the -v flag). However, I want to be able to grep a file once and separate the matching and non-matching lines. If this is not possible, please let me know. I realize I could do this easily in Perl or awk, but I am curious whether it is possible with grep. Thanks!

If it does not have to be grep, this is a single-pass split based on a pattern: lines where the pattern is found go to file1, lines where it is not go to file2:

awk '/pattern/ {print $0 > "file1"; next} {print $0 > "file2"}' inputfile

I had the exact
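The same single-pass split that the awk one-liner performs can be sketched in Python (a minimal illustration; the function name and the in-memory lists are my own choices, not from the question):

```python
import re

def split_by_pattern(lines, pattern):
    """Single-pass split: lines matching `pattern` go to one list,
    the rest to another (mirrors running grep and grep -v in one pass)."""
    rx = re.compile(pattern)
    matched, unmatched = [], []
    for line in lines:
        # Each line is inspected exactly once, then routed to one bucket.
        (matched if rx.search(line) else unmatched).append(line)
    return matched, unmatched

hits, misses = split_by_pattern(["apple pie", "banana", "apple tart"], "apple")
```

Writing the two lists to file1 and file2 afterwards reproduces the awk behaviour.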

Matching two very very large vectors with tolerance (fast! but working space sparing)

Submitted by 戏子无情 on 2019-12-01 06:39:10

Question: Consider that I have two vectors. One is a reference vector/list that includes all values of interest, and one is a sample vector that could contain any possible value. Now I want to find matches of my sample inside the reference list with a certain tolerance, which is not fixed but depends on the values being compared:

matches: abs(((referencelist - sample[i]) / sample[i]) * 10^6) < 0.5

Rounding both vectors is not an option! For example, consider: referencelist <- read.table(header=TRUE, text=
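The question is in R, but the idea transfers: because the tolerance is relative (a ppm window around each sample value), sorting the reference once and binary-searching a window per sample keeps the working set small for very large vectors. A Python sketch under that assumption (names and the tolerance conversion are mine):

```python
import bisect

def ppm_matches(reference, sample, tol_ppm=0.5):
    """For each sample value s, find reference values r satisfying
    abs((r - s) / s * 1e6) < tol_ppm, i.e. within a relative ppm window.
    Sorting the reference once makes each lookup O(log n)."""
    ref = sorted(reference)
    out = []
    for s in sample:
        delta = abs(s) * tol_ppm / 1e6          # absolute half-width of the window
        lo = bisect.bisect_left(ref, s - delta)
        hi = bisect.bisect_right(ref, s + delta)
        out.append(ref[lo:hi])
    return out

matches = ppm_matches([1000000.0, 1000000.4, 2000000.0], [1000000.2])
```

In R the same windowed search could be done with findInterval on a sorted reference, avoiding the full outer comparison matrix.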

Java stream - purpose of having both anyMatch and noneMatch operations?

Submitted by 自古美人都是妖i on 2019-12-01 06:37:10

The anyMatch operation will return true if it finds a matching element; the noneMatch operation will return false if it finds a matching element. The anyMatch operation will return false if it finds no matching element; the noneMatch operation will return true if it finds no matching element. Therefore, instead of having both of these operations, could we not do with just one, or am I missing something? In essence, anyMatch returning false is a way of evaluating the truth of noneMatch's predicate.

Same reason you have a != b instead of only supporting !(a == b): ease of use, and clarity of purpose. Yes
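The same redundancy-by-design exists in Python, where noneMatch has no built-in name and must be spelled as a negation, which is exactly the readability cost the answer describes (the Java comments map each line to the stream operation it parallels):

```python
nums = [2, 4, 6, 8]

any_odd = any(n % 2 == 1 for n in nums)        # Java: nums.stream().anyMatch(isOdd)
none_odd = not any(n % 2 == 1 for n in nums)   # Java: nums.stream().noneMatch(isOdd)

# The two are exact logical negations, so one would suffice in principle;
# having both names lets the call site state its intent directly instead of
# forcing the reader to mentally push a negation through the expression.
```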

Emacs matching tags highlighting

Submitted by 只愿长相守 on 2019-12-01 03:20:59

When Paren Match Highlighting (in the Options menu) is enabled, it nicely highlights matched parentheses. Is there something like this but for XML tags? For example, if I had <para>lksjdflksdjfksdjf</para>, it would highlight both tags if my point was anywhere inside one of the tags (even including the less-than and greater-than signs). Thanks for the help!

Answer (Luke Girvin): Mike Spindel has written a minor mode, hl-tags-mode, which provides this feature.

Source: https://stackoverflow.com/questions/7784334/emacs-matching-tags-highlighting

Extracting URL link using regular expression re - string matching - Python

Submitted by 我与影子孤独终老i on 2019-12-01 01:04:44

I've been trying to extract URLs from a text file using the re API: any link that starts with http://, https://, or www. The file contains plain text as well as HTML source code. The HTML part is easy because I can extract the links using BeautifulSoup, but the plain text seems to be more challenging. I found this online, which seems to be the best implementation of URL extraction; however, it fails on certain tags, specifically it can't handle tags and includes them in the URL. Any help is appreciated, because I'm not familiar with string matching at all. Here is the signature: sp1=re.findall("http[s]?://(?:[a-zA
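The truncated regex above cannot be reconstructed, but the specific failure described (tag characters being swallowed into the URL) can be avoided by excluding HTML delimiters from the URL character class. A deliberately simplified sketch, not the question's original pattern:

```python
import re

# Illustrative pattern only: grab http://, https://, or www. links and stop
# at whitespace or at characters that end a URL in HTML context (<, >, ", ', ')').
URL_RX = re.compile(r"(?:https?://|www\.)[^\s<>\"')]+")

text = 'Visit https://example.com/page now, or see <a href="http://foo.org/x">link</a> and www.bar.net.'

# Trailing sentence punctuation is stripped after matching, since a period
# can legitimately appear inside a URL but rarely ends one.
urls = [u.rstrip('.,;:') for u in URL_RX.findall(text)]
```

Robust URL extraction has many more edge cases (balanced parentheses, bare domains, IDNs); for production use a dedicated extractor is a safer choice than a hand-rolled regex.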

Word matching in SQL Server

Submitted by 风流意气都作罢 on 2019-11-30 21:47:36

I have a requirement to provide a suggested match between data in two database tables. The basic requirement is: a "match" should be suggested for the highest number of matched words (irrespective of order) between the two columns in question. For example, given the data:

Table A
1, 'What other text in here'
2, 'What am I doing here'
3, 'I need to find another job'
4, 'Other text in here'

Table B
5, 'Other text in here'
6, 'I am doing what here'
7, 'Purple unicorns'
8, 'What are you doing in here'

Ideally, my desired matches would look as follows:
1 -> 8 (3 words matched)
2 -> 6 (5 words matched)
3
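The core scoring step (count shared words, ignoring order and case) is easy to state outside SQL. A naive per-row sketch in Python; note it picks each A row's best B row independently, whereas the question's desired output appears to imply a one-to-one assignment, which would need an extra step:

```python
def best_matches(table_a, table_b):
    """For each row in A, pick the row in B sharing the most words
    (case-insensitive, order-ignoring), with the shared-word count."""
    results = {}
    for ida, text_a in table_a.items():
        words_a = set(text_a.lower().split())
        best_id, best_count = None, 0
        for idb, text_b in table_b.items():
            count = len(words_a & set(text_b.lower().split()))
            if count > best_count:
                best_id, best_count = idb, count
        results[ida] = (best_id, best_count)
    return results

a = {1: 'What other text in here', 2: 'What am I doing here'}
b = {5: 'Other text in here', 6: 'I am doing what here', 8: 'What are you doing in here'}
pairs = best_matches(a, b)
```

In SQL Server the equivalent approach is usually to split each column into words (STRING_SPLIT), join the word sets, and GROUP BY the row pair to count matches.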

Matching ORB Features with a threshold

Submitted by 谁都会走 on 2019-11-30 21:20:31

Question: My project is herb recognition on Android. I use ORB to get keypoints and features, and to match the features. I want to use this algorithm: I use 4 reference images and match their features pairwise: image 1 to image 2, 1-3, 1-4, 2-3, 2-4, 3-4. Then I store the minimum and maximum distances in a database as thresholds (minimum threshold = total minimum / 6). When I recognize a new image, I compare its new minimum and maximum distances with those in the database. But I don't know how to do that. { for (j
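Setting ORB aside, the threshold bookkeeping the question describes can be sketched on plain numbers. The decision rule at the end is an assumption on my part, since the question leaves open how the comparison should work:

```python
def distance_thresholds(pairwise_min_distances):
    """Given the minimum match distance from each reference-image pair
    (6 pairs for 4 images), derive the stored thresholds: the minimum
    threshold is the average of the per-pair minima ("total minimum / 6"),
    the maximum threshold is the largest of them."""
    n = len(pairwise_min_distances)
    min_threshold = sum(pairwise_min_distances) / n
    max_threshold = max(pairwise_min_distances)
    return min_threshold, max_threshold

def accept(new_min, min_threshold, max_threshold):
    # One plausible rule (my assumption): accept the new image when its
    # best match distance falls inside the stored range.
    return min_threshold <= new_min <= max_threshold

lo, hi = distance_thresholds([10.0, 12.0, 8.0, 14.0, 11.0, 9.0])
```

With OpenCV, the per-pair minima would come from the distance fields of the DMatch results returned by the ORB matcher.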

R - Assign column value based on closest match in second data frame

Submitted by 你。 on 2019-11-30 20:54:44

I have two data frames, logger and df (times are numeric):

logger <- data.frame(time = c(1280248354:1280248413), temp = runif(60, min=18, max=24.5))
df <- data.frame(obs = c(1:10), time = runif(10, min=1280248354, max=1280248413), temp = NA)

I would like to search logger$time for the closest match to each row in df$time, and assign the associated logger$temp to df$temp. So far, I have been successful using the following loop:

for (i in 1:length(df$time)) {
  closestto <- which.min(abs(logger$time - df$time[i]))
  df$temp[i] <- logger$temp[closestto]
}

However, I now have large data frames (logger
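The loop is O(n) per query because which.min scans the whole logger vector each time. Since the logger times are sorted, a binary search per query brings this to O(log n). A Python sketch of that idea (in R, findInterval on the sorted times plays the same role as bisect here):

```python
import bisect

def nearest_temp(logger_times, logger_temps, query_times):
    """For each query time, return the temperature at the closest logger
    time. logger_times must be sorted ascending."""
    temps = []
    for t in query_times:
        i = bisect.bisect_left(logger_times, t)
        # The nearest neighbour is either the insertion point or the
        # element just before it; clamp both candidates to valid indices.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(logger_times)),
            key=lambda j: abs(logger_times[j] - t),
        )
        temps.append(logger_temps[best])
    return temps

result = nearest_temp([100, 200, 300], [18.0, 21.0, 24.0], [102, 290, 1])
```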
