indexing

KeyError: 0 when accessing value in pandas series

江枫思渺然 submitted on 2019-12-30 18:26:52
Question: In my script I have df['Time'] as shown below.

    497   2017-08-06 11:00:00
    548   2017-08-08 15:00:00
    580   2017-08-10 04:00:00
    646   2017-08-12 23:00:00
    Name: Time, dtype: datetime64[ns]

But when I do t1 = pd.Timestamp(df['Time'][0]) I get an error like this: KeyError: 0. Do I need any type conversion here, and if yes, how can it be fixed?

Answer 1: You're looking for df.iloc:

    df['Time'].iloc[0]

df['Time'][0] would have worked if your series had an index beginning at 0. And if you need only a scalar, use Series.iat:

    df['Time'].iat[0]
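As a hedged, minimal sketch of the two accessors (the toy frame below is illustrative, not the asker's data):

    import pandas as pd

    # Series whose index starts at 497, mirroring the question.
    df = pd.DataFrame(
        {"Time": pd.to_datetime(["2017-08-06 11:00:00", "2017-08-08 15:00:00"])},
        index=[497, 548],
    )

    # df["Time"][0] raises KeyError: 0 -- there is no label 0 in the index.
    t1 = pd.Timestamp(df["Time"].iloc[0])  # first element, by position
    t2 = df["Time"].iat[0]                 # scalar-only positional access
    print(t1, t2)

Series.iat is the scalar-only counterpart of iloc and skips some indexing overhead, which is why the answer suggests it when a single value is all you need.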

Is MongoDB TTL on a nested document possible?

时光毁灭记忆、已成空白 submitted on 2019-12-30 17:16:31
Question: I want to know if it's possible to use TTL on nested documents. Scenario: I have an Account, and inside it I have Sessions. Sessions need to expire in 30 minutes. I've set everything up, but obviously when I set a TTL index on Account.Sessions.EndDateTime it removes the whole Account. Can I make sure it removes only the Session? This is what it looks like in the database. Notice how it will delete the whole Account, not only the Session, when EndDateTime comes:

    { "_id" : ObjectId("53af273888dba003f429540b"),
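The excerpt is cut off above, but the behavior it describes is documented MongoDB behavior: a TTL index always removes whole documents, never subdocuments. The usual workaround is to keep sessions in their own collection that references the account. A hedged pymongo sketch, with illustrative database, collection, and field names:

    from datetime import datetime, timezone
    from bson import ObjectId
    from pymongo import MongoClient

    db = MongoClient()["mydb"]  # hypothetical database name

    # expireAfterSeconds=0 deletes each session once its EndDateTime passes.
    db.sessions.create_index("EndDateTime", expireAfterSeconds=0)

    db.sessions.insert_one({
        "accountId": ObjectId("53af273888dba003f429540b"),  # back-reference to the Account
        "EndDateTime": datetime.now(timezone.utc),           # session expiry time
    })

Note that the TTL monitor runs roughly once a minute, so expired sessions disappear with a short delay rather than at the exact moment EndDateTime passes.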

Best approach for doing full-text search with list-of-integers documents

时光怂恿深爱的人放手 submitted on 2019-12-30 13:31:57
Question: I'm working on a C++/Qt image-retrieval system based on similarity that works as follows (I'll try to avoid irrelevant or off-topic details): I take a collection of images and build an index from them using OpenCV functions. After that, for each image, I get a list of integer values representing the important "classes" each image belongs to. The more integers two images have in common, the more similar they are believed to be. So, when I want to query the system, I just have to compute the
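The question breaks off above, so the following is only a hedged Python sketch of the scoring it describes (names and data are invented), not the asker's C++ code: rank indexed images by how many "class" integers they share with the query image.

    def shared_classes(a, b):
        # Number of "classes" two images have in common.
        return len(set(a) & set(b))

    # Invented toy index: image name -> list of class ids.
    index = {
        "img1.png": [3, 17, 42, 99],
        "img2.png": [3, 42],
        "img3.png": [7, 8],
    }

    query = [3, 42, 99]
    ranked = sorted(index,
                    key=lambda name: shared_classes(index[name], query),
                    reverse=True)
    print(ranked)  # ['img1.png', 'img2.png', 'img3.png']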

Logical indexing in Numpy with two indices as in MATLAB

▼魔方 西西 submitted on 2019-12-30 11:41:51
Question: How do I replicate this indexing done in MATLAB with NumPy?

    X = magic(5);
    M = [0,0,1,2,1];
    X(M==0, M==2)

which returns:

    ans =
         8
        14

I've found that doing this in NumPy is not correct, since it does not give me the same results:

    X = np.matrix([[17, 24,  1,  8, 15],
                   [23,  5,  7, 14, 16],
                   [ 4,  6, 13, 20, 22],
                   [10, 12, 19, 21,  3],
                   [11, 18, 25,  2,  9]])
    M = np.array([0,0,1,2,1])
    X.take([M==0]).take([M==2], axis=1)

since I get:

    matrix([[24, 24, 24, 24, 24]])

What is the correct way to logically index with two
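The question is truncated, but the standard NumPy counterpart of this MATLAB pattern is np.ix_, which builds an open mesh from the two boolean masks; a minimal sketch:

    import numpy as np

    X = np.array([[17, 24,  1,  8, 15],
                  [23,  5,  7, 14, 16],
                  [ 4,  6, 13, 20, 22],
                  [10, 12, 19, 21,  3],
                  [11, 18, 25,  2,  9]])
    M = np.array([0, 0, 1, 2, 1])

    # Rows where M == 0 crossed with columns where M == 2.
    print(X[np.ix_(M == 0, M == 2)])  # [[ 8]
                                      #  [14]]

This matches the MATLAB result (8 and 14). The original attempt fails because take() treats the boolean masks as integer indices 0 and 1, so it picks repeated elements instead of selecting a submatrix.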

Why does MySQL not always use index merge here?

我只是一个虾纸丫 submitted on 2019-12-30 10:26:10
Question: Consider this table:

    CREATE TABLE `Alarms` (
      `AlarmId` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `DeviceId` BINARY(16) NOT NULL,
      `Code` BIGINT(20) UNSIGNED NOT NULL,
      `Ended` TINYINT(1) NOT NULL DEFAULT '0',
      `NaturalEnd` TINYINT(1) NOT NULL DEFAULT '0',
      `Pinned` TINYINT(1) NOT NULL DEFAULT '0',
      `Acknowledged` TINYINT(1) NOT NULL DEFAULT '0',
      `StartedAt` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
      `EndedAt` TIMESTAMP NULL DEFAULT NULL,
      `MarkedForDeletion` TINYINT(1) NOT NULL DEFAULT '0',
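The question body is cut off above, so the actual query never made it into this excerpt. As a hedged illustration only (the query shape, credentials, and database name are assumptions), one way to see from Python which plan MySQL picks, and whether it uses index_merge, is to run EXPLAIN through mysql-connector-python:

    import mysql.connector

    conn = mysql.connector.connect(user="app", password="secret",
                                   host="localhost", database="alarms_db")
    cur = conn.cursor(dictionary=True)
    cur.execute(
        "EXPLAIN SELECT AlarmId FROM Alarms "
        "WHERE Ended = 0 AND Acknowledged = 0"
    )
    for row in cur.fetchall():
        # 'type' reads 'index_merge' when MySQL merges two single-column indexes.
        print(row["type"], row["key"], row["Extra"])
    cur.close()
    conn.close()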

Why does Lucene cause OOM when indexing large files?

江枫思渺然 submitted on 2019-12-30 09:42:32
Question: I'm working with Lucene 2.4.0 and the JVM (JDK 1.6.0_07). I'm consistently receiving OutOfMemoryError: Java heap space when trying to index large text files.

Example 1: Indexing a 5 MB text file runs out of memory with a 64 MB max heap size. So I increased the max heap size to 512 MB. This worked for the 5 MB text file, but Lucene still used 84 MB of heap space to do it. Why so much? The class FreqProxTermsWriterPerField appears to be the biggest memory consumer by far according to

MongoDB regular expression with indexed field

笑着哭i submitted on 2019-12-30 07:52:14
Question: I was creating my first app using MongoDB. I created an index for a field and tried a find query with a $regex parameter, launched in a shell:

    > db.foo.find({A: {$regex: 'BLABLA!25500[0-9]'}}).explain()
    {
        "cursor" : "BtreeCursor A_1 multi",
        "nscanned" : 500001,
        "nscannedObjects" : 10,
        "n" : 10,
        "millis" : 956,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
            "A" : [
                [ "", { } ],
                [ /BLABLA!25500[0-9]/, /BLABLA!25500[0-9]/ ]
            ]
        }
    }

It's very strange, because when
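The excerpt stops mid-sentence, but the explain output already shows the problem: nscanned is 500001 while n is 10, meaning the entire index was scanned. MongoDB can only apply tight index bounds to a case-sensitive regex anchored at the start of the string with ^. A hedged pymongo sketch of the contrast (database and data are illustrative):

    from pymongo import MongoClient

    foo = MongoClient()["test"]["foo"]
    foo.create_index("A")

    # Unanchored regex: every index entry must be examined.
    slow = foo.find({"A": {"$regex": "BLABLA!25500[0-9]"}})

    # Prefix-anchored, case-sensitive regex: the scan is bounded to the
    # keys starting with "BLABLA!25500".
    fast = foo.find({"A": {"$regex": "^BLABLA!25500[0-9]"}})

    print(fast.explain())  # inspect how many index keys were examined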

Why does MySQL use the wrong index?

点点圈 submitted on 2019-12-30 07:32:30
Question: I have another question regarding optimizing MySQL indices for our prioritizing jBPM. The relevant indices look like this (columns as in MySQL's SHOW INDEX output):

    Table      | Non_unique | Key_name                             | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type
    JBPM_TIMER | 1          | JBPM_TIMER_REVERSEPRIORITY__DUEDATE_ | 1            | REVERSEPRIORITY_ | A         | 17          | NULL     | NULL   | YES  | BTREE
    JBPM_TIMER | 1          | JBPM_TIMER_REVERSEPRIORITY__DUEDATE_ | 2            | DUEDATE_         | A         | 971894      | NULL     | NULL   | YES  | BTREE
    JBPM_TIMER | 1          | JBPM_TIMER_DUEDATE_                  | 1            | DUEDATE_         | A         | 971894      | NULL     | NULL   | YES  | BTREE

jBPM asks two questions when retrieving