indexing

Store GeoJSON polygons in MongoDB

一曲冷凌霜 · Submitted 2020-01-12 03:32:09
Question: I have the following problem with MongoDB. I have some geo data from my home country and I have to store it in MongoDB to set up a simple Web Feature Service. This service will mostly run bounding-box queries using the $within operator. The data is in GeoJSON format, so at first I imported the villages and cities, which are represented as points ([1,2]) in this format. No problem. Next step: rivers and streets, which are LineStrings and, according to GeoJSON, represented this way: [[1,2]…
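For orientation, here is the shape of the documents and the bounding-box query the question is heading toward, written as plain Python dicts (as a driver such as pymongo would send them) — no running MongoDB is assumed, and the names and coordinates are made up. With GeoJSON documents and a 2dsphere index, $geoWithin with a $geometry polygon is the usual replacement for the older $within operator:

```python
# Point document (a village/city); "loc" and "name" are illustrative field names.
city = {
    "name": "SomeVillage",
    "loc": {"type": "Point", "coordinates": [8.5, 50.1]},  # [lng, lat]
}

# LineString document (a river/street): a list of coordinate pairs.
river = {
    "name": "SomeRiver",
    "loc": {"type": "LineString",
            "coordinates": [[8.5, 50.1], [8.6, 50.2], [8.7, 50.3]]},
}

# Bounding-box query against a 2dsphere index on "loc": the box is expressed
# as a closed GeoJSON Polygon ring (first and last vertex identical).
bbox_query = {
    "loc": {
        "$geoWithin": {
            "$geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [8.0, 50.0], [9.0, 50.0],
                    [9.0, 51.0], [8.0, 51.0],
                    [8.0, 50.0],
                ]],
            }
        }
    }
}
```

The index itself would be created with something like `collection.create_index([("loc", "2dsphere")])` before running the query.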

Get the index (counter) of an 'ng-repeat' item with AngularJS?

假如想象 · Submitted 2020-01-11 18:18:17
Question: I am using AngularJS and its ng-repeat directive to display a sequence of questions. I need to number each question starting with 1. How do I display and increment such a counter with ng-repeat? Here is what I have so far: <ul> <li ng-repeat="question in questions | filter: {questionTypesId: questionType, selected: true}"> <div> <span class="name"> {{ question.questionText }} </span> </div> <ul> <li ng-repeat="answer in question.answers"> <span class="name"> {{answer.selector}}. {{ answer…

Index a MySQL database with Apache Lucene, and keep them synchronized

给你一囗甜甜゛ · Submitted 2020-01-11 17:24:10
Question: When a new item is added in MySQL, it must also be indexed by Lucene. When an existing item is removed from MySQL, it must also be removed from Lucene's index. The idea is to write a script that will be called every x minutes via a scheduler (e.g. a cron task). This is a way to keep MySQL and Lucene synchronized. What I have managed so far: for each newly added item in MySQL, Lucene indexes it too; for each item already in MySQL, Lucene does not reindex it (no duplicate items). This is the…
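The periodic sync step the question describes can be sketched language-agnostically. Below is a minimal Python sketch in which the database and the Lucene index are stood in by plain dicts keyed on the primary key; `sync_index` is a hypothetical name, and a real script would read MySQL rows and drive Lucene's `IndexWriter` (adding, and deleting by a stored-ID term) instead:

```python
def sync_index(db_rows, index_docs):
    """Bring index_docs in line with db_rows, keyed on primary key:
    index new keys, delete keys gone from the database."""
    added = {k: v for k, v in db_rows.items() if k not in index_docs}
    removed = [k for k in index_docs if k not in db_rows]
    index_docs.update(added)   # index only new items => no duplicates
    for k in removed:          # drop items deleted from MySQL
        del index_docs[k]
    return added, removed
```

Detecting *updated* rows (same key, changed content) needs an extra signal such as a `last_modified` column compared against a timestamp stored at the previous run, which is the usual next step for such a cron job.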

How to avoid NaN index values when Importing from .csv w/o headers to a pandas dataframe?

早过忘川 · Submitted 2020-01-11 13:13:23
Question: I have a .csv file without headers. I want to import it to create a pandas DataFrame object: df1 = pd.read_csv(infile, global_sep, header = 0) When I run print df1.head(), I get the following output: 1491895800000 -64 640 15424 0 1491895799995 -64 640 15424 1 1491895799990 -64 640 15424 2 1491895799985 -64 640 15424 3 1491895799980 -64 640 15424 4 1491895799975 -64 640 15424 Doing df1.reset_index(inplace = True, drop = True) doesn't change the output. How to avoid NaN index values when…
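The symptom comes from `header=0`, which tells pandas to treat the first *data* row as the column names. For a headerless file the fix is `header=None`, optionally with `names=` to label the columns yourself. A small self-contained sketch (the column names and the comma separator are made up for illustration; the question's file uses some other separator):

```python
import io
import pandas as pd

# Stand-in for the headerless CSV file from the question.
raw = io.StringIO(
    "1491895800000,-64,640,15424\n"
    "1491895799995,-64,640,15424\n"
    "1491895799990,-64,640,15424\n"
)

# header=None: no row is consumed as a header; names= supplies the labels.
df = pd.read_csv(raw, header=None,
                 names=["timestamp", "rssi", "width", "value"])
```

With `header=None`, pandas assigns a clean default RangeIndex (0, 1, 2, …) and no data row is lost to the header.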

R delete rows in data frame where nrow of index is smaller than certain value

痴心易碎 · Submitted 2020-01-11 11:33:13
Question: I want to delete certain rows in a data frame when the number of rows with the same index is smaller than a pre-specified value. > fof.6.5[1:15, 1:3] draw Fund.ID Firm.ID 1 1 1667 666 2 1 1572 622 3 1 1392 553 4 1 248 80 5 1 3223 332 6 2 2959 1998 7 2 2659 1561 8 2 14233 2517 9 2 10521 12579 10 2 3742 1045 11 3 9093 10121 12 3 15681 21626 13 3 26371 70170 14 4 27633 52720 15 4 13751 656 In this example, I want each index to have 5 rows. The third draw (which is my index) has fewer than 5 rows…
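The filtering rule itself — keep a row only if its group occurs at least n times — is easy to state. A minimal sketch of that logic in Python (in R, the equivalent one-liner is the well-known idiom `fof.6.5[ave(fof.6.5$draw, fof.6.5$draw, FUN = length) >= 5, ]`):

```python
from collections import Counter

def keep_complete_draws(rows, key="draw", min_rows=5):
    """Keep only rows whose value under `key` occurs at least min_rows times."""
    counts = Counter(r[key] for r in rows)
    return [r for r in rows if counts[r[key]] >= min_rows]
```

Applied to the question's data, draws 1 and 2 (5 rows each) survive while draw 3 (3 rows) is dropped.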

Proper way to access latest row for each individual identifier?

可紊 · Submitted 2020-01-11 10:45:12
Question: I have a table core_message in Postgres, with millions of rows, that looks like this (simplified); its columns include id (integer, not null, default nextval('core_message_id_seq'::regclass)) and mmsi (integer, not…
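In Postgres the canonical answer to "latest row per identifier" is `SELECT DISTINCT ON (mmsi) … ORDER BY mmsi, ts DESC`, ideally backed by an index on `(mmsi, ts DESC)`. What that query computes can be sketched in plain Python over tuples (the timestamp column name and tuple layout are assumptions, since the question's column list is truncated):

```python
def latest_per_mmsi(rows):
    """rows: iterable of (id, mmsi, ts) tuples.
    Return the row with the greatest ts for each mmsi, ordered by mmsi."""
    latest = {}
    for row in rows:
        _id, mmsi, ts = row
        if mmsi not in latest or ts > latest[mmsi][2]:
            latest[mmsi] = row
    return sorted(latest.values(), key=lambda r: r[1])
```

This is a single pass keeping one candidate per key — the same shape of work the database does with a loose index scan.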

Adding multiple indexes at same time in MySQL

三世轮回 · Submitted 2020-01-11 09:12:26
Question: During tests on MySQL, I wanted to add multiple indexes to a table with more than 50 million rows. Does MySQL support adding two indexes at the same time on different columns? If so, do I need to open two sessions, or can it be done with one command? Answer 1: Yes, but... In older versions, use ALTER TABLE tbl ADD INDEX(...), ADD INDEX(...); so that all the work is done in one pass. In newer versions, ALGORITHM=INPLACE makes it so that the work can be done in the "background" for InnoDB tables,…

Reindexing using NEST V5.4 - ElasticSearch

点点圈 · Submitted 2020-01-11 07:46:19
Question: I'm quite new to Elasticsearch. I'm trying to reindex an index in order to rename it. I'm using the NEST API v5.4. I saw this example: var reindex = elasticClient.Reindex<Customer>(r => r.FromIndex("customers-v1") .ToIndex("customers-v2") .Query(q => q.MatchAll()) .Scroll("10s") .CreateIndex(i => i.AddMapping<Customer>(m => m.Properties(p => p.String(n => n.Name(name => name.Zipcode).Index(FieldIndexOption.not_analyzed)))))); Source: http://thomasardal.com/elasticsearch-migrations-with-c-and-nest

Query by coordinates takes too long - options to optimize?

时间秒杀一切 · Submitted 2020-01-11 07:05:15
Question: I have a table where I store events (about 5M at the moment, but there will be more). Each event has two attributes that I care about for this query: location (a latitude/longitude pair) and relevancy. My goal: for given location bounds (SW/NE latitude/longitude pairs, i.e. four floating-point numbers), return the top 100 events by relevancy that fall within those bounds. I'm currently using the following query: select * from event where latitude >= :swLatitude and latitude <=…
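The logic that SQL query implements — filter to the box, then take the top k by relevancy — can be sketched in a few lines of Python; `heapq.nlargest` keeps only k candidates instead of sorting every match. Field names mirror the columns the question names, and the events here are made up:

```python
import heapq

def top_events(events, sw_lat, sw_lng, ne_lat, ne_lng, k=100):
    """Top-k events by relevancy inside the SW/NE bounding box."""
    in_box = (e for e in events
              if sw_lat <= e["lat"] <= ne_lat
              and sw_lng <= e["lng"] <= ne_lng)
    return heapq.nlargest(k, in_box, key=lambda e: e["relevancy"])
```

On the database side the usual optimizations for this pattern are a spatial index (e.g. PostGIS with a GiST index, or a composite B-tree on latitude/longitude) so the box filter doesn't scan all 5M rows, plus `ORDER BY relevancy DESC LIMIT 100`.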

Is an index clustered or unclustered in Oracle?

给你一囗甜甜゛ · Submitted 2020-01-11 00:58:29
Question: How can I determine whether an Oracle index is clustered or unclustered? I ran select FIELD from TABLE where rownum < 100, where FIELD is the field the index is built on. I got ordered tuples, but the result is wrong because the index is unclustered. Answer 1: By default, all indexes in Oracle are unclustered. The only clustered indexes in Oracle are the primary key indexes of index-organized tables (IOTs). You can determine whether a table is an IOT by looking at the IOT_TYPE column in the ALL_TABLES…