mongo 3 duplicates on unique index - dropDups

Submitted by 江枫思渺然 on 2019-11-28 18:52:16

Yes, dropDups has been deprecated since version 2.7.5, because it was not possible to predict which document would be deleted in the process.

Typically, you have two options:

  1. Use a new collection:

    • Create a new collection,
    • Create the unique index on this new collection,
    • Run a batch to copy all the documents from the old collection to the new one, making sure you ignore duplicate key errors during the process (a sketch follows this list).
  2. Deal with it in your own collection manually:

    • make sure your code won't insert any more duplicate documents,
    • run a batch on your collection to delete the duplicates (making sure you keep the right one if they are not completely identical),
    • then add the unique index.
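
A minimal mongo shell sketch of the copy step in the first option, assuming the old collection is called numbers, the new one numbers_dedup, and value is the field that has to be unique (all of these names are placeholders):

// create the new collection by defining its unique index first (placeholder names)
db.numbers_dedup.createIndex({ "value": 1 }, { unique: true })

// copy every document; duplicate key errors show up in the WriteResult and can be ignored
db.numbers.find().forEach(function (doc) {
  var res = db.numbers_dedup.insert(doc)
  if (res.hasWriteError() && res.getWriteError().code !== 11000) {
    // 11000 is the duplicate key error code; anything else should surface
    printjson(res.getWriteError())
  }
})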

For your particular case, I would recommend the first option, but with a trick:

  • Create a new collection with the unique index,
  • Update your code so it now inserts documents into both collections,
  • Run a batch to copy all documents from the old collection to the new one (ignoring duplicate key errors),
  • Rename the new collection to match the old name (see the sketch after this list),
  • Re-update your code so it writes only to the "old" collection again.
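
Assuming the same placeholder names, the rename step is a single mongo shell call; the second argument drops the existing collection that still has the old name:

// swap the de-duplicated collection into place under the old name
db.numbers_dedup.renameCollection("numbers", true)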

As highlighted by @Maxime-Beugnet, you can create a batch script to remove duplicates from a collection. I have included my approach below, which is relatively fast when the number of duplicates is small compared to the collection size. For demonstration purposes, this script will de-duplicate the collection created by the following script:

// build a test collection in which every value appears two or three times
db.numbers.drop()

var counter = 0
while (counter <= 100000) {
  db.numbers.save({ "value": counter })
  db.numbers.save({ "value": counter })
  if (counter % 2 == 0) {
    db.numbers.save({ "value": counter })
  }
  counter = counter + 1;
}

You can remove the duplicates in this collection by writing an aggregation query that returns every value that occurs more than once, together with the _ids of the documents that share it.

var cur = db.numbers.aggregate([
    { $group: { _id: { value: "$value" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]);

Using the cursor you can then iterate over the duplicate records and implement your own business logic to decide which of the duplicates to remove. In the example below I am simply keeping the first occurrence:

while (cur.hasNext()) {
    var doc = cur.next();
    // keep the first _id of each group and remove the rest
    var index = 1;
    while (index < doc.uniqueIds.length) {
        db.numbers.remove({ _id: doc.uniqueIds[index] });
        index = index + 1;
    }
}
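
If the number of duplicates is large, a variation of the same loop that collects the extra _ids first and removes them in a single call may be faster; this is only a sketch, and the variable names are just for illustration:

// re-run the aggregation to get a fresh cursor, then collect the extra _ids
var dupCur = db.numbers.aggregate([
    { $group: { _id: { value: "$value" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]);

var idsToRemove = [];
while (dupCur.hasNext()) {
    // keep the first _id of each group and queue the rest for removal
    idsToRemove = idsToRemove.concat(dupCur.next().uniqueIds.slice(1));
}

// for a very large number of duplicates, split idsToRemove into chunks instead of one big $in
db.numbers.remove({ _id: { $in: idsToRemove } });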

After removing the duplicates you can add a unique index:

db.numbers.createIndex({ "value": 1 }, { unique: true })

pip install mongo_remove_duplicate_indexes

The best way is to create a script in Python (or in any language you prefer) that iterates over the collection: create a new collection with a unique index using db.collectionname.createIndex({'indexname': 1}, {unique: true}), then insert the documents from the previous collection into the new one. Since the key you want to be distinct cannot be duplicated, the duplicate documents will not be inserted into the new collection, and you can handle the resulting exception easily with exception handling.
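
The package itself is written in Python, but the same idea can be sketched in the mongo shell (insertMany is available from the 3.2 shell onward; old_collection, new_collection and indexname are placeholders):

// create the new collection with its unique index (placeholder names)
db.new_collection.createIndex({ "indexname": 1 }, { unique: true })

try {
    // ordered: false keeps inserting past duplicate key errors instead of stopping at the first one;
    // for a large collection, copy in batches rather than with one toArray() call
    db.new_collection.insertMany(db.old_collection.find().toArray(), { ordered: false })
} catch (e) {
    // the duplicate key errors raised by the unique index end up here and can be ignored
    print("duplicates skipped: " + e)
}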

Check out the package source code for an example.
