indexing

np.delete and np.s_. What's so special about np_s?

孤者浪人 submitted on 2020-01-22 09:39:27
Question: I don't really understand why regular indexing can't be used for np.delete. What makes np.s_ so special? For example, with this code, used to delete some of the rows of this array: inlet_names = np.delete(inlet_names, np.s_[1:9], axis = 0) Why can't I simply use regular indexing and do inlet_names = np.delete(inlet_names, [1:9], axis = 0) or inlet_names = np.delete(inlet_names, inlet_names[1:9], axis = 0)? From what I can gather, np.s_ is the same as np.index_exp except it doesn't return
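
For reference, a minimal sketch of the comparison the question is making (the array contents below are made up; only the slice bounds are the asker's):

import numpy as np

inlet_names = np.arange(20).reshape(10, 2)   # stand-in for the asker's array

# np.s_[1:9] builds a slice object, which np.delete accepts directly
trimmed = np.delete(inlet_names, np.s_[1:9], axis=0)

# A plain list of indices also works, but it has to be spelled out as
# list(range(1, 9)); the bare literal [1:9] is a syntax error because
# slice notation is only valid inside square-bracket indexing.
trimmed_alt = np.delete(inlet_names, list(range(1, 9)), axis=0)

assert (trimmed == trimmed_alt).all()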

Pandas: Get duplicated indexes

北战南征 submitted on 2020-01-22 05:38:22
Question: Given a dataframe, I want to get the duplicated indexes which do not have duplicate values in the columns, and see which values are different. Specifically, I have this dataframe: import pandas as pd wget https://www.dropbox.com/s/vmimze2g4lt4ud3/alt_exon_repeatmasker_intersect.bed alt_exon_repeatmasker = pd.read_table('alt_exon_repeatmasker_intersect.bed', header=None, index_col=3) In [74]: alt_exon_repeatmasker.index.is_unique Out[74]: False And some of the indexes have duplicate values in
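
A rough sketch of one common way to surface such rows (the dataframe here is a made-up stand-in, not the asker's BED file):

import pandas as pd

# Small stand-in frame with a non-unique index
df = pd.DataFrame(
    {"start": [10, 10, 50, 70], "end": [20, 25, 60, 80]},
    index=["exon1", "exon1", "exon2", "exon3"],
)

print(df.index.is_unique)                      # False

# All rows whose index value occurs more than once
dupes = df[df.index.duplicated(keep=False)]

# Within each duplicated index, keep only groups whose column values differ
differing = dupes.groupby(level=0).filter(lambda g: len(g.drop_duplicates()) > 1)
print(differing)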

String index out of range: n

安稳与你 submitted on 2020-01-22 02:49:08
Question: I'm having a bit of a problem with this code. Each time I execute it, it gives me the error String index out of range: 'n', where n is the number of characters entered in the textbox pertaining to this code (that is, textbox t2). It gets stuck checking that first textbox and does not move on to the next one in the array. Object c1[] = { t2.getText(), t3.getText(), t4.getText() }; String b; String f; int counter = 0; int d; for(int i =0;i<=2;i++) { b = c1[i].toString(); for(int j=0;j<

Optimizing a simple mysql select on a large table (75M+ rows)

梦想与她 submitted on 2020-01-22 02:19:08
Question: I have a statistics table which grows at a high rate (around 25M rows/day) that I'd like to optimize for selects. The table fits in memory, and the server has plenty of spare memory (32G; the table is 4G). My simple roll-up query is: EXPLAIN select FROM_UNIXTIME(FLOOR(endtime/3600)*3600) as ts, sum(numevent1) as success, sum(numevent2) as failure from stats where endtime > UNIX_TIMESTAMP()-3600*96 group by ts order by ts; +----+-------------+--------------+------+---------------+------+---------

Can SOLR perform an UPSERT?

。_饼干妹妹 submitted on 2020-01-21 13:39:46
Question: I've been attempting to do the equivalent of an UPSERT (insert, or update if it already exists) in Solr. I only know what does not work, and the Solr/Lucene documentation I have read has not been helpful. Here's what I have tried: curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","name":{"set":"steve"}}]' {"responseHeader":{"status":409,"QTime":2},"error":{"msg":"Document not found for update. id=1","code":409}} I do up to 50 updates in one request and
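
For illustration only, a sketch in Python (the core path "mycore" is a placeholder, and the requests library stands in for curl): sending the full document as an ordinary add behaves as insert-or-replace keyed on the uniqueKey, whereas the {"set": ...} atomic-update form shown above requires an existing document on this setup.

import json
import requests

# Placeholder URL; adjust host and core name to your setup.
SOLR_UPDATE = "http://localhost:8983/solr/mycore/update?commit=true"

def upsert(docs):
    # A regular Solr add replaces any existing document with the same
    # uniqueKey, so sending full documents acts as insert-or-update.
    resp = requests.post(
        SOLR_UPDATE,
        headers={"Content-Type": "application/json"},
        data=json.dumps(docs),
    )
    resp.raise_for_status()
    return resp.json()

# Batch of documents, mirroring the asker's batching of up to 50 per request
print(upsert([{"id": "1", "name": "steve"}]))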

How to get last indexed record in Solr?

孤街浪徒 submitted on 2020-01-21 03:07:35
Question: I want to know how to get/search the last indexed record in Apache Solr. When an existing record is updated it goes to the end of all the records, so I want to get that last indexed record. Thanks. Answer 1: You could add a 'timestamp' field to your Solr schema that puts the current date/time into the record when it is added. <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/> Then, do a sort in descending order by this field and the first record
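
A rough sketch of the query the answer describes, using Python's requests (the core name "mycore" is a placeholder; "timestamp" is the field added to the schema above):

import requests

resp = requests.get(
    "http://localhost:8983/solr/mycore/select",
    params={"q": "*:*", "sort": "timestamp desc", "rows": 1, "wt": "json"},
)
resp.raise_for_status()
docs = resp.json()["response"]["docs"]
latest = docs[0] if docs else None   # the most recently indexed/updated record
print(latest)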

Is there C# support for an index-based sort?

穿精又带淫゛_ submitted on 2020-01-20 05:55:30
Question: Is there any built-in C# support for doing an index sort? More details: I have several sets of data stored in individual generic Lists of double. These lists are always equal in length and hold corresponding data items, but they come and go dynamically, so I can't just store corresponding data items in a class or struct cleanly. (I'm also dealing with some legacy issues.) I need to be able to sort these keyed from any one of the data sets. My thought of the best way to do this is to
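
The question is about C#, but the underlying idea (an index sort, often called argsort) can be sketched briefly in Python; the lists below are hypothetical:

# Compute the index permutation that sorts one "key" list, then apply
# it to every parallel list so corresponding items stay aligned.
keys    = [3.2, 1.5, 2.8]
values1 = [30.0, 10.0, 20.0]
values2 = [0.3, 0.1, 0.2]

order = sorted(range(len(keys)), key=keys.__getitem__)   # index sort

keys_sorted    = [keys[i] for i in order]
values1_sorted = [values1[i] for i in order]
values2_sorted = [values2[i] for i in order]
print(order, keys_sorted, values1_sorted, values2_sorted)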

How to Optimize the Use of the “OR” Clause When Used with Parameters (SQL Server 2008)

做~自己de王妃 submitted on 2020-01-20 04:53:10
Question: I wonder if there is any wise way to rewrite the following query so that the indexes on the columns get used by the optimizer? CREATE PROCEDURE select_Proc1 @Key1 int=0, @Key2 int=0 AS BEGIN SELECT key3 FROM Or_Table WHERE (@key1 = 0 OR Key1 = @Key1) AND (@key2 = 0 OR Key2 = @Key2) END GO According to the article "How to Optimize the Use of the 'OR' Clause When Used with Parameters" by Preethiviraj Kulasingham: even though the columns in the WHERE clauses are covered by indexes, SQL Server is unable to