mongodb-query

MongoDB geolocation boundaries search/query

Submitted by 痴心易碎 on 2020-01-04 15:27:29
Question: I have documents containing a list of location "boxes" (square areas). Each box is represented by two points (bottom-left or south-west, top-right or north-east). An example document:

    {
      locations: [
        [[bottom,left],[top,right]],
        [[bottom,left],[top,right]],
        [[bottom,left],[top,right]]
      ]
    }

I'm using a 2d index for those boundary points. My input is a specific location point [x,y], and I want to fetch all documents that have at least one box that the point is located in. Is there any geospatial…
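The question is cut off, but the setup describes a point-in-box containment test. Below is a minimal sketch of one way to make that indexable (an assumption, not the thread's answer): store each box as a GeoJSON Polygon under a 2dsphere index and query with $geoIntersects. The places collection name and the sample coordinates are hypothetical.

    // Hypothetical sketch: each box becomes a closed GeoJSON Polygon ring
    // built from its [bottom,left] / [top,right] corners.
    db.places.insertOne({
      locations: [{
        type: "Polygon",
        coordinates: [[[0, 0], [10, 0], [10, 10], [0, 10], [0, 0]]]
      }]
    });
    db.places.createIndex({ locations: "2dsphere" });

    // Matches documents where at least one stored box contains the point.
    db.places.find({
      locations: {
        $geoIntersects: {
          $geometry: { type: "Point", coordinates: [5, 5] }
        }
      }
    });

Because $geoIntersects matches if any element of the locations array intersects the point, this covers the "at least one box" requirement directly.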

Serializing MongoDB find() return into non-anonymous JSON array, using PyMongo

Submitted by ∥☆過路亽.° on 2020-01-04 14:29:09
Question: My Python code queries a MongoDB and gets back an array of the following objects:

    { u'attribute': u'value', u'_id': ObjectId('534776c66e5987041f6154bd') }

What I want to achieve is to return the following JSON:

    {
      'mycollectionkey': [
        { 'attribute': 'value', '_id': ObjectId('534776c66e5987041f6154bd') },
        ...and so on.
      ]
    }

However, when I do:

    docs = mongodb.find(...query...)
    docs_json = bson.json_util.dumps(docs)
    return flask.jsonify(success=True, mycollectionkey=docs_json)

I get: {…
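The output is truncated, but the symptom is consistent with double encoding: json_util.dumps() returns a string, and flask.jsonify() then serializes that string again, so the documents arrive as one escaped JSON string. A minimal sketch of one fix (an assumption based on that reading) round-trips through json.loads() so jsonify receives real dicts; the database, collection, and route names are hypothetical.

    import json

    import flask
    from bson import json_util
    from pymongo import MongoClient

    app = flask.Flask(__name__)
    collection = MongoClient()["mydb"]["mycollection"]  # hypothetical names

    @app.route("/docs")
    def docs():
        cursor = collection.find()
        # json_util.dumps() knows how to encode BSON types such as ObjectId;
        # json.loads() then turns that string back into plain dicts/lists,
        # so jsonify serializes real objects instead of a pre-encoded string.
        docs_as_dicts = json.loads(json_util.dumps(list(cursor)))
        return flask.jsonify(success=True, mycollectionkey=docs_as_dicts)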

Mongoose aggregate Lookup - How to filter by specific id

Submitted by 泪湿孤枕 on 2020-01-04 12:50:15
Question: I am trying to build an aggregate pipeline with $lookup that receives, from another collection, only the items that are not equal to a specific _id. For example:

    ClinicsCollection:
    {_id: 1, name: 'some name1'}
    {_id: 2, name: 'some name2'}
    {_id: 3, name: 'some name3'}

    BusinessCollection:
    {_id: 1, name: "some business name", clinics: [1, 2, 3]}

My aggregate pipeline query:

    db.business.aggregate([
      {$match: {_id: mongoose.Types.ObjectId(businessId)}},
      {$lookup: {from: "ClinicsCollection", localField: "clinics", foreignField:…
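The pipeline is cut off. One common way to finish it (an assumption, not necessarily the accepted answer) is to do a plain $lookup and then drop the unwanted clinic with $filter; excludedClinicId is a hypothetical variable holding the _id to exclude.

    db.business.aggregate([
      { $match: { _id: mongoose.Types.ObjectId(businessId) } },
      { $lookup: {
          from: "ClinicsCollection",
          localField: "clinics",
          foreignField: "_id",
          as: "clinics"
      } },
      // Keep every joined clinic except the one being excluded.
      { $addFields: {
          clinics: {
            $filter: {
              input: "$clinics",
              as: "clinic",
              cond: { $ne: ["$$clinic._id", excludedClinicId] }
            }
          }
      } }
    ]);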

How to save 1 million records to MongoDB asynchronously?

Submitted by 只谈情不闲聊 on 2020-01-04 10:17:53
Question: I want to save 1 million records to MongoDB using JavaScript, like this:

    for (var i = 0; i < 10000000; i++) {
      model = buildModel(i);
      db.save(model, function(err, done) {
        console.log('cool');
      });
    }

I tried it; it saved ~160 records, then hung for 2 minutes, then exited. Why?

Answer 1: It blew up because you are not waiting for an asynchronous call to complete before moving on to the next iteration. What this means is that you are building a "stack" of unresolved operations until this causes a problem.…
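The answer is cut off there. A minimal sketch of the usual remedy (an assumption about where the answer was heading): bound the number of in-flight operations by inserting in batches and waiting for each batch to finish. This uses the official Node.js driver's insertMany(); buildModel() is taken from the question and assumed to exist.

    const { MongoClient } = require("mongodb");

    async function run() {
      const client = await MongoClient.connect("mongodb://localhost:27017");
      const coll = client.db("test").collection("models");
      const BATCH = 1000;

      for (let i = 0; i < 1000000; i += BATCH) {
        const batch = [];
        for (let j = i; j < i + BATCH; j++) batch.push(buildModel(j));
        // Awaiting each insertMany keeps at most one batch in flight,
        // so unresolved operations can no longer pile up.
        await coll.insertMany(batch, { ordered: false });
      }
      await client.close();
    }

    run().catch(console.error);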

MongoDB aggregate pipeline slow after first match step

Submitted by ♀尐吖头ヾ on 2020-01-04 06:04:13
Question: I have a MongoDB aggregate pipeline that contains a number of steps (match on indexed fields, add fields, sort, collapse, sort again, page, project results). If I comment out all of the steps except the first match step, the query executes super fast (0.075 seconds), as it's leveraging the proper index. However, if I then try to perform ANY follow-up step, even something as simple as getting the results count, the query starts taking 27 seconds! Here is the query: (Don't get too caught…
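The query itself is cut off. A generic first diagnostic step (not from the thread): run the pipeline under explain with executionStats to confirm the $match stage still uses the index once later stages are added, since a subsequent sort or group can force the server to materialize every matched document. The stage shown is hypothetical.

    db.collection.explain("executionStats").aggregate([
      { $match: { indexedField: "value" } },  // hypothetical indexed match
      { $count: "n" }
    ]);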

Mongo query not giving exact results for aggregate function

Submitted by 三世轮回 on 2020-01-04 05:28:25
Question: My Mongo database contains a collection 'Shops', and the data is like below:

    {
      "_id": ObjectId("XXXX1b83d2b227XXXX"),
      "ShopId": 435,
      "products": [
        { "productId": "1234", "productName": "non veg", "productCategory": "meals", "mrp": "38" },
        { "productId": "5234", "productName": "non veg", "productCategory": "meals", "mrp": "38" },
        { "productId": "6234", "productName": "apple", "productCategory": "juice", "mrp": "38" },
        { "productId": "7234", "productName": "non veg",…
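The question is cut off before it states which aggregate is giving inexact results, so the following is only a sketch of the usual pattern for aggregating over the embedded products array: $unwind it first, then group. Here it counts products per category for each shop; the accumulator would need to be adapted to the actual goal.

    db.Shops.aggregate([
      { $unwind: "$products" },
      { $group: {
          _id: { shop: "$ShopId", category: "$products.productCategory" },
          count: { $sum: 1 }
      } }
    ]);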

String field value length in array in MongoDB

Submitted by 那年仲夏 on 2020-01-04 04:05:30
Question: How can I get the length of characters from a field that is nested in another field and sits inside an array? E.g.:

    {
      "_id": ObjectId("687e1db"),
      "content": {
        "ods": "1102223000241",
        "startDate": ISODate("2017-05-11T12:00:00Z"),
        "classes": [
          {
            "driveNumber": "9999078900007091",
            "number": "00107605829357",
            "sId": "0000000005009593"
          }
        ],
        "user": "SoftLogic"
      },
      "level": 2
    }

I want to find documents where content.classes.number has more than 16 characters in the field.
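A minimal sketch of two possible approaches (assumptions, since no answer is shown). A regex works in a plain find(), because array fields match element-wise; alternatively, on MongoDB 3.6+ an aggregation can measure the length explicitly with $strLenCP. The collection name is hypothetical.

    // Any classes element whose number has 17 or more characters.
    db.collection.find({ "content.classes.number": /^.{17,}/ });

    // Or unwind the array and compare the measured length to 16.
    db.collection.aggregate([
      { $unwind: "$content.classes" },
      { $match: {
          $expr: { $gt: [{ $strLenCP: "$content.classes.number" }, 16] }
      } }
    ]);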