unique

Unique combinations of all variables

Submitted by 妖精的绣舞 on 2019-12-25 11:58:12
Question: I have attempted to use the following code to come up with a table of unique combinations of a bunch of variables.

    V1 = as.vector(CRmarch30[1])
    V2 = as.vector(CRmarch30[2])
    V3 = as.vector(CRmarch30[3])
    V4 = as.vector(CRmarch30[4])
    V5 = as.vector(CRmarch30[5])
    V6 = as.vector(CRmarch30[6])
    V7 = as.vector(CRmarch30[7])

As you may have already guessed, CRmarch30 is a data frame with 7 columns. I converted each column into a vector. Then, I used the following code to create all unique combinations of the 7
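The question is cut off before its combination-building code. The usual idea (take the unique values of each column, then form the cross-product) can be sketched in Python; the toy columns below are placeholders, not the poster's CRmarch30 data:

```python
import itertools

# Hypothetical stand-in for the CRmarch30 data frame: each key is a column,
# each value is that column's observed values (duplicates included).
crmarch30 = {
    "V1": [1, 1, 2],
    "V2": ["a", "b", "a"],
}

# Unique values per column, then the cross-product of those sets gives
# every possible combination of the columns' distinct values.
uniques = [sorted(set(col)) for col in crmarch30.values()]
combos = list(itertools.product(*uniques))
print(combos)  # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```

In R the same cross-product is what expand.grid over the per-column unique values would produce.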

Python Pandas: Group by and count distinct value over all columns?

Submitted by 放肆的年华 on 2019-12-25 08:34:49
Question: I have

    df
       column1 column2 column3 column4
    0  name    True    True    NaN
    1  name    NaN     True    NaN
    2  name1   NaN     True    True
    3  name1   True    True    True

and I would like to group by and count distinct values over all columns. I am trying:

    df.groupby('column1').nunique()

but I am receiving this error:

    AttributeError: 'DataFrameGroupBy' object has no attribute 'nunique'

Does anybody have a suggestion?

Answer 1: You can use stack to get a Series and then Series.groupby with SeriesGroupBy.nunique:

    df1 = df.set_index('column1').stack()
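The answer's snippet is cut off after the stack() call. A minimal runnable version of the approach it starts (the DataFrame mirrors the one in the question) might look like:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "column1": ["name", "name", "name1", "name1"],
    "column2": [True, np.nan, np.nan, True],
    "column3": [True, True, True, True],
    "column4": [np.nan, np.nan, True, True],
})

# stack() drops NaNs and flattens the remaining values into a Series with
# a (column1, column-name) MultiIndex, so grouping on level 0 and calling
# nunique() counts distinct non-NaN values per group across all columns.
df1 = df.set_index("column1").stack()
counts = df1.groupby(level=0).nunique()
print(counts)
```

Here every surviving value is True, so each group has one distinct value. (The original error came from older pandas versions where DataFrameGroupBy had no nunique; SeriesGroupBy.nunique was available.)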

Delete repeated values of different data types from cell array

Submitted by 筅森魡賤 on 2019-12-25 07:58:16
Question: I have a cell array with many repeated values, which includes strings, sample times, and saturation upper and lower limits. For example:

    MyValues = {
    'Lookc_at_the_stars'
    'Lookc_how_they_shine'
    'forc_you'
    'andm_everything_they_do'
    'Theym_were_all_yellow'
    'COLDPLAY_STOP'
    'COLDPLAY_PLAY'
    'COLDPLAY_PLAY'
    'COLDPLAY_PLAY'
    'COLDPLAY_BREAK'
    'COLDPLAY_BREAK'
    'Lookc_How_they_shinefor_you'
    'its_true'
    'COLDPLAY_STOP'
    'COLDPLAY_STOP'
    }

And the output I require is:

    NewMyValues = {
    'Lookc_at_the
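The expected output is truncated above, but it is evidently the list with later repeats removed and the original order kept. In MATLAB that is typically unique(MyValues, 'stable'); the same order-preserving de-duplication can be sketched in Python:

```python
# A shortened stand-in for the MyValues cell array from the question.
my_values = [
    "Lookc_at_the_stars", "Lookc_how_they_shine", "forc_you",
    "COLDPLAY_STOP", "COLDPLAY_PLAY", "COLDPLAY_PLAY",
    "COLDPLAY_BREAK", "COLDPLAY_BREAK", "COLDPLAY_STOP",
]

# dict.fromkeys keeps only the first occurrence of each value and
# preserves insertion order, so later repeats are dropped.
new_my_values = list(dict.fromkeys(my_values))
print(new_my_values)
```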

Loop over unique values R

Submitted by 和自甴很熟 on 2019-12-25 07:36:19
Question: I previously posted a loop question and am trying another loop, with no success. Help figuring this out would be greatly appreciated. For now, to get my work done, I'm going to subset the data by year and run my original function as is, but one of the datasets I'm working with is a long time series. My original function calculates the number of fish at age for a given year's dataset. This function works fine. What I would like to do is add a for loop that will allow the function to
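The question is cut off before the loop itself, but the pattern it describes (apply a per-year function to every year in a long time series) can be sketched as follows; the records and the fish_at_age function here are invented placeholders, not the poster's data or code:

```python
# Placeholder records: (year, age, count) tuples standing in for the
# poster's fish data.
records = [(2001, 1, 10), (2001, 2, 5), (2002, 1, 7), (2002, 3, 2)]

def fish_at_age(rows):
    """Toy per-year summary: total fish across ages (a stand-in for the
    poster's real calculation)."""
    return sum(count for _, _, count in rows)

# Loop over the unique years, subsetting the data for each one --
# the same idea as looping over unique(df$year) in R.
results = {}
for year in sorted({year for year, _, _ in records}):
    subset = [row for row in records if row[0] == year]
    results[year] = fish_at_age(subset)

print(results)  # {2001: 15, 2002: 9}
```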

Counting unique variables within unique variables [R]

Submitted by 偶尔善良 on 2019-12-25 06:37:00
Question: Suppose this is my data:

    X Y    Z
    1 1 2323
    1 1   45
    1 1   67
    1 2    1
    1 2   90
    1 3   34
    1 3 1267
    1 3  623
    1 4   81
    1 4  501
    2 1  456
    2 1   78
    2 2   41
    2 2   56
    2 3   90
    2 3   71
    2 4   24
    2 4   98
    2 5   42
    2 5  361

How do I sum the values of Z for each unique value of Y within each X, so that I get a dataframe that looks like:

    X Y    Z
    1 1 2435
    1 2   91
    1 3 1924
    1 4  582
    2 1  534
    2 2   97
    2 3  161
    2 4  122
    2 5  403

Answer 1: Assuming the dataframe is named 'dat', use aggregate.formula, which is one of the methods of aggregate: >
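The answer's aggregate call is truncated; in R it would be along the lines of aggregate(Z ~ X + Y, data = dat, FUN = sum). The same group-and-sum, sketched in Python on the first few rows of the question's data:

```python
from collections import defaultdict

# First five (X, Y, Z) rows from the question's data.
rows = [(1, 1, 2323), (1, 1, 45), (1, 1, 67), (1, 2, 1), (1, 2, 90)]

# Sum Z over every (X, Y) pair -- the same grouping that
# aggregate(Z ~ X + Y, data = dat, FUN = sum) performs in R.
totals = defaultdict(int)
for x, y, z in rows:
    totals[(x, y)] += z

print(sorted(totals.items()))  # [((1, 1), 2435), ((1, 2), 91)]
```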

Error when accessing Worklight server deployed on Tomcat

Submitted by 佐手、 on 2019-12-25 04:51:53
Question: I deployed the war file and the adapter file to Tomcat, and everything was fine, but when I try to access the Worklight server the request is

    http://10.30.3.11:8080/nantian/apps/services/api/attendance/android/query

and logcat shows this error:

    [http://10.30.3.11:8080/nantian/apps/services/api/attendance/android/query] failure. state: 500, response: The server was unable to process the request from the application. Please try again later. [http://10.30.3.11:8080/nantian/apps/services/api/attendance

GORM: how to ensure uniqueness of a related object's property

Submitted by ε祈祈猫儿з on 2019-12-25 03:57:29
Question: I'm trying to get my head around GORM and relational mapping. The relationships are working fine, but there is one problem: I can't seem to ensure that every MailAddress added to a MailingList has a unique address. What would be the most efficient way to do this? Note: there is no unique constraint on MailAddress.address; identical addresses can exist in the same table.

    class MailAddress {
        String name
        String email
        static belongsTo = MailingList
        static constraints = {
            name blank:true
            email

Find matches between unique pairs of two dataframes and bind values in R

Submitted by 时光怂恿深爱的人放手 on 2019-12-25 01:53:09
Question: I have two dataframes, dat1 and dat2, and I would like to find matches between the first two columns of the two dataframes and join the values contained in each dataframe for unique pairs.

    dat1 <- data.frame(V1 = c("home","fire","sofa","kitchen"),
                       V2 = c("cat","water","TV","knife"),
                       V3 = c('date1','date2','date3','date4'))

           V1    V2    V3
    1    home   cat date1
    2    fire water date2
    3    sofa    TV date3
    4 kitchen knife date4

    dat2 <- data.frame(V1 = c("home","water","sofa","knife"),
                       V2 = c("cat","fire","TV","kitchen"),
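The question ends before dat2 is fully defined, but the visible rows suggest the pairs should match regardless of column order (fire/water in dat1 versus water/fire in dat2). One way to sketch that in Python is to key each row by its order-independent pair; the dat2 date values ('dateA'...) are invented for illustration, since dat2's third column is not shown:

```python
dat1 = [("home", "cat", "date1"), ("fire", "water", "date2"),
        ("sofa", "TV", "date3"), ("kitchen", "knife", "date4")]
dat2 = [("home", "cat", "dateA"), ("water", "fire", "dateB"),
        ("sofa", "TV", "dateC"), ("knife", "kitchen", "dateD")]

# frozenset((a, b)) is the same key whichever order the pair appears in,
# so fire/water and water/fire join to one row.
index2 = {frozenset((a, b)): v for a, b, v in dat2}
matches = [(a, b, v1, index2[frozenset((a, b))])
           for a, b, v1 in dat1 if frozenset((a, b)) in index2]
print(matches)
```

In R the analogous trick is to sort the two key columns within each row before merging.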

Select distinct + select top to merge multiple rows

Submitted by 天大地大妈咪最大 on 2019-12-24 22:14:19
Question: I'm trying to select rows from a table, one row per email address, and return one firstname from the top row in the email list. The query, though, returns multiple email addresses. What am I doing wrong?

    SELECT DISTINCT email,
           (SELECT TOP 1 firstname
            FROM onsite_clients_archive oc
            WHERE oc.client_id = oca.client_id
            ORDER BY client_id)
    FROM onsite_clients_archive oca
    WHERE users_user_id IS NULL

Answer 1: Your bug is that WHERE oc.client_id = oca.client_id should be WHERE oc.email = oca.email. You didn't

Python unique list based on item

Submitted by 假如想象 on 2019-12-24 20:15:18
Question: I have a list:

    old_list = [(1, 'AAA', None, 1),
                (2, 'AAA', 'x', 0),
                (5, 'AAB', 'z', 1),
                (6, 'ABB', 'x', 1),
                (9, 'ABB', 'x', 1)]

I want a new list that keeps one tuple per unique i[1], the one with the larger id i[0], like this:

    new_list = [(2, 'AAA', 'x', 0),
                (5, 'AAB', 'z', 1),
                (9, 'ABB', 'x', 1)]

Can someone help me?

Answer 1: You can use itertools.groupby:

    old_list = [(1, 'AAA', None, 1),
                (2, 'AAA', 'x', 0),
                (5, 'AAB', 'z', 1),
                (6, 'ABB', 'x', 1),
                (9, 'ABB', 'x', 1)]

    from itertools import groupby
    from
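The answer's snippet is cut off at the imports. A complete version of the approach it starts (group on i[1], keep the tuple with the largest id i[0]) might look like:

```python
from itertools import groupby
from operator import itemgetter

old_list = [(1, 'AAA', None, 1), (2, 'AAA', 'x', 0), (5, 'AAB', 'z', 1),
            (6, 'ABB', 'x', 1), (9, 'ABB', 'x', 1)]

# groupby only merges consecutive equal keys, so the input must be sorted
# by the grouping key first; old_list already is, but sorting defensively
# keeps the sketch general.
old_list.sort(key=itemgetter(1))

# Within each i[1] group, keep the tuple with the largest id i[0].
new_list = [max(group, key=itemgetter(0))
            for _, group in groupby(old_list, key=itemgetter(1))]
print(new_list)  # [(2, 'AAA', 'x', 0), (5, 'AAB', 'z', 1), (9, 'ABB', 'x', 1)]
```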