mongodb-indexes

MongoDB uses COLLSCAN when returning just _id

时光毁灭记忆、已成空白, submitted on 2021-02-18 07:51:50
Question: I want to return all IDs from a MongoDB collection, so I used the code below:

db.coll.find({}, { _id: 1 })

But MongoDB scans the whole collection instead of reading the information from the default { _id: 1 } index. From the log:

{ find: "collection", filter: {}, projection: { _id: 1 } } planSummary: COLLSCAN cursorid:30463374118 keysExamined:0 docsExamined:544783 numYields:4286 nreturned:544782 reslen:16777238 locks:{ Global: { acquireCount: { r: 8574 } }, Database: { acquireCount: { r: 4287
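A common workaround is to force the _id index with hint, which turns this into a covered query (a sketch against the collection name from the question; exact planner behavior varies by MongoDB version, since with an empty filter the planner may prefer a collection scan on its own):

```javascript
// mongosh sketch. With an empty filter MongoDB may choose COLLSCAN even
// though only _id is projected. Forcing the default _id index makes the
// query covered: keysExamined == nreturned, docsExamined == 0.
db.coll.find({}, { _id: 1 }).hint({ _id: 1 })

// Verify the plan: planSummary should now read IXSCAN { _id: 1 }
db.coll.find({}, { _id: 1 }).hint({ _id: 1 }).explain("executionStats")
```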

Spring Data MongoDB Slow MongoTemplate.find() Performance

青春壹個敷衍的年華, submitted on 2021-02-09 09:57:38
Question: I'm having performance issues when querying ~12,000 user documents, indexed by one field (companyId), with no other filter. The whole collection has only ~27,000 documents. It takes about 12 seconds to get the ~12,000 rows of data. I tried running explain for this query:

db.instoreMember.find({companyId:"5b6be3e2096abd567974f924"}).explain();

The result follows: { "queryPlanner" : { "plannerVersion" : 1, "namespace" : "production.instoreMember", "indexFilterSet" : false, "parsedQuery" : { "companyId" : { "$eq
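To tell whether the time is spent in the index scan, the document fetch, or the driver-side mapping, it helps to run explain with executionStats (a generic diagnostic sketch; collection and field names are taken from the question, the projected fields are hypothetical):

```javascript
// mongosh sketch. executionStats reports totalKeysExamined,
// totalDocsExamined and executionTimeMillis, which separates server-side
// query time from Spring Data's network and object-mapping overhead.
db.instoreMember
  .find({ companyId: "5b6be3e2096abd567974f924" })
  .explain("executionStats")

// If executionTimeMillis is small but the application call is slow, the
// cost is in transferring and mapping ~12,000 full documents; projecting
// only the needed fields (hypothetical names here) shrinks that payload:
db.instoreMember.find(
  { companyId: "5b6be3e2096abd567974f924" },
  { name: 1, email: 1 }
)
```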

MongoDB add fields of low cardinality to compound indexes?

你离开我真会死。, submitted on 2021-01-28 02:51:19
Question: I have read that putting indexes on low-cardinality fields is pointless. Would this hold true for a compound index such as:

db.perms.createIndex({"owner": 1, "object_type": 1, "target": 1});

With queries such as:

db.perms.find({"owner": "me", "object_type": "square"});
db.perms.find({"owner": "me", "object_type": "circle", "target": "you"});

The number of distinct object_type values would grow over time (probably to no more than 10 or 20 max) but would start out at about 2 or 3. Similarly would a
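The low-cardinality caution applies to a standalone index, not to a non-leading field of a compound one: within a single owner's range, object_type still narrows the scan. A sketch using the index and queries from the question:

```javascript
// mongosh sketch. Both queries walk a contiguous range of the compound
// index: the selective "owner" prefix bounds the range, and the
// low-cardinality "object_type" (and "target") merely narrow it further.
db.perms.createIndex({ owner: 1, object_type: 1, target: 1 })

db.perms.find({ owner: "me", object_type: "square" })
db.perms.find({ owner: "me", object_type: "circle", target: "you" })

// explain() should show IXSCAN with bounds on all supplied prefix fields.
db.perms.find({ owner: "me", object_type: "square" }).explain()
```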

MongoDB Indexing: Multiple single-field vs single compound?

℡╲_俬逩灬., submitted on 2021-01-27 13:23:09
Question: I have a collection of geospatial+temporal data with a few additional properties, which I'll be displaying on a map. The collection has a few million documents at this point and will grow over time. Each document has the following fields:

Location: [geojson object]
Date: [Date object]
ZoomLevel: [int32]
EntryType: [ObjectID]

I need to be able to rapidly query this collection by any combination of Location (generally a geowithin query), Date (generally $gte/$lt), ZoomLevel, and EntryType. What
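One option (a sketch, not a definitive answer; the field names come from the question, the collection name is hypothetical, and the key order follows MongoDB's usual equality-then-range guidance) is a single compound index that leads with the geospatial key:

```javascript
// mongosh sketch. A 2dsphere key can participate in a compound index;
// the remaining keys let the same index serve the equality filters and
// the Date range without a separate index per field.
db.entries.createIndex({
  Location: "2dsphere",
  EntryType: 1,   // equality filter
  ZoomLevel: 1,   // equality filter
  Date: 1         // range filter ($gte/$lt) goes last
})
```

Queries that omit the leading geospatial predicate cannot use this index, so query shapes that filter only by Date or EntryType may still warrant a second, non-geo compound index.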

What is the correct way to Index in MongoDB when big combination of fields exist

柔情痞子, submitted on 2020-06-27 06:06:10
Question: Consider a search panel that includes multiple options, like in the picture below. I'm working with Mongo and created a compound index on 3-4 properties in a specific order. But when I run different combinations of searches, I see a different order in the execution plan (explain()) each time. Sometimes I see a collection scan (bad), and sometimes it fits the index correctly (IXSCAN). The selective fields that should be handled by Mongo indexes are: (brand, Types, Status, Warehouse, Carries, Search
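When filters can appear in any combination, a single compound index only helps queries that include its prefix fields. A common approach (a sketch; the field names are taken from the question, the collection name and values are hypothetical) is a small set of indexes whose leading keys cover the most frequent filter combinations, ordered equality, then sort, then range:

```javascript
// mongosh sketch. Each index serves the query shapes that include its
// leading (prefix) fields; a query on { Status, Warehouse } alone cannot
// use an index that starts with "brand".
db.products.createIndex({ brand: 1, Types: 1, Status: 1 })
db.products.createIndex({ Status: 1, Warehouse: 1 })

// explain() confirms which index the planner picked for a given shape:
db.products.find({ brand: "acme", Status: "active" }).explain()
```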

Using an Index with Mongo's $first Group Operator

十年热恋, submitted on 2020-05-16 22:00:20
问题 Per Mongo's latest $group documentation, there is a special optimization for $first: Optimization to Return the First Document of Each Group If a pipeline sorts and groups by the same field and the $group stage only uses the $first accumulator operator, consider adding an index on the grouped field which matches the sort order. In some cases, the $group stage can use the index to quickly find the first document of each group. It makes sense, since only the first entry in an ordered index
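The documented pattern pairs an index on the grouped field with a matching $sort immediately before the $group (a sketch; the collection and field names are illustrative, not from the question):

```javascript
// mongosh sketch of the $first optimization. The index matches the sort,
// so $group can jump to the first entry of each item's index range
// (a DISTINCT_SCAN in explain() output, in the versions that apply this
// optimization) instead of scanning every document.
db.sales.createIndex({ item: 1, timestamp: 1 })

db.sales.aggregate([
  { $sort: { item: 1, timestamp: 1 } },
  { $group: { _id: "$item", firstSale: { $first: "$timestamp" } } }
])
```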

MongoDB takes different time for with and without hint

梦想的初衷, submitted on 2020-01-25 07:06:48
Question: I know, I know: hint forces MongoDB to use a specific index. I have two queries:

db.mycol.find({sourceId:ObjectId("596bac5a6f473e1a042bFFFF"),myFlag:false}).count()
db.mycol.find({sourceId:ObjectId("596bac5a6f473e1a042bFFFF"),myFlag:false}).hint("sourceId_-1_myFlag_-1").count()

These two commands take different amounts of time: without hint, 4-6 seconds; with hint, 0-1 second. However, both commands use the same index:

{ "sourceId" : -1, "myFlag" : -1 }

When I execute db.mycol.aggregate(
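To confirm that both runs really execute the same plan, comparing the winning and rejected plans from explain is more reliable than timing alone (a sketch using the query from the question; a stale cached plan is one plausible cause of the gap):

```javascript
// mongosh sketch. Without the hint the planner may trial several
// candidate plans, or reuse a cached one; "rejectedPlans" and
// "executionStats" show what it actually did in each case.
db.mycol
  .find({ sourceId: ObjectId("596bac5a6f473e1a042bFFFF"), myFlag: false })
  .explain("allPlansExecution")

// Clearing the plan cache rules out a stale cached plan:
db.mycol.getPlanCache().clear()
```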

Why Mongo query for null filters in FETCH after performing IXSCAN

Deadly, submitted on 2020-01-15 11:53:37
Question: According to the Mongo documentation, the { item : null } query matches documents that either contain the item field with a value of null or do not contain the item field at all. I can't find documentation for this, but as far as I can tell, both cases (value is null or field is missing) are stored in the index as null. So if I do db.orders.createIndex({item: 1}) and then db.orders.find({item: null}), I would expect an IXSCAN to find all documents that either contain the item field whose value is
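One way to see the difference (a sketch; the exact plan shape varies by MongoDB version) is to compare the generic null query with a $type query that matches only explicit nulls, which the index bounds alone can answer:

```javascript
// mongosh sketch. { item: null } must match both explicit nulls and
// missing fields, so after the IXSCAN the server FETCHes each candidate
// document to apply the full { item: { $eq: null } } predicate.
// BSON type 10 ("null") matches only documents where item is explicitly
// null, so no post-IXSCAN filter is needed for it.
db.orders.createIndex({ item: 1 })

db.orders.find({ item: null }).explain()            // IXSCAN + FETCH with filter
db.orders.find({ item: { $type: 10 } }).explain()   // explicit nulls only
```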