mongo 3 duplicates on unique index - dropDups

刺人心 2020-12-13 10:32

The documentation for MongoDB says: "Changed in version 3.0: The dropDups option is no longer available."

Is there anything I can do (other than downgrading)?

3 Answers
  • 2020-12-13 10:53

    Yes, dropDups has been deprecated since version 2.7.5 because it was not possible to predict correctly which document would be deleted in the process.

    Typically, you have two options:

    1. Use a new collection:

      • Create a new collection,
      • Create the unique index on this new collection,
      • Run a batch to copy all the documents from the old collection to the new one, making sure you ignore duplicate key errors during the process.
    2. Deal with it in your own collection manually:

      • Make sure you won't insert more duplicated documents in your code,
      • Run a batch on your collection to delete the duplicates (and make sure you keep the good one if they are not completely identical),
      • Then add the unique index.

    For your particular case, I would recommend the first option, but with a trick:

    • Create a new collection with the unique index,
    • Update your code so you now insert documents into both collections,
    • Run a batch to copy all documents from the old collection to the new one (ignoring duplicate key errors),
    • Rename the new collection to match the old name,
    • Re-update your code so you now write only to the "old" collection.
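    The copy-and-skip-duplicates step above can be sketched in plain JavaScript. This is an in-memory stand-in, not driver code: the `Set` plays the role of the unique index, and `value` is a hypothetical unique key — adapt the field name and run the real copy through the shell or a driver against your collections.

```javascript
// In-memory sketch of option 1: copy documents into a "new collection",
// skipping any document whose unique key has already been inserted.
// `value` is a hypothetical unique key; substitute your own field.
function copyIgnoringDuplicates(oldDocs, uniqueKey) {
  var newDocs = [];
  var seen = new Set(); // plays the role of the unique index
  oldDocs.forEach(function (doc) {
    var key = doc[uniqueKey];
    if (seen.has(key)) return; // "duplicate key error" -> ignore
    seen.add(key);
    newDocs.push(doc);
  });
  return newDocs;
}

var old = [{ value: 1 }, { value: 2 }, { value: 1 }, { value: 3 }];
var deduped = copyIgnoringDuplicates(old, "value");
// deduped keeps the first occurrence of each key: values 1, 2, 3
```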
  • 2020-12-13 10:55

    As highlighted by @Maxime-Beugnet you can create a batch script to remove duplicates from a collection. I have included my approach below, which is relatively fast if the number of duplicates is small in comparison to the collection size. For demonstration purposes this script will de-duplicate the collection created by the following script:

    // Seed a test collection: every value appears twice, even values three times
    db.numbers.drop()

    var counter = 0
    while (counter <= 100000) {
      db.numbers.save({ "value": counter })
      db.numbers.save({ "value": counter })
      if (counter % 2 === 0) {
        db.numbers.save({ "value": counter })
      }
      counter = counter + 1;
    }
    

    You can remove the duplicates in this collection by writing an aggregate query that returns all values occurring more than once, together with the `_id`s of the documents that share them:

    var cur = db.numbers.aggregate([
      { $group: { _id: { value: "$value" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
      { $match: { count: { $gt: 1 } } }
    ]);
    

    Using the cursor you can then iterate over the duplicate records and implement your own business logic to decide which of the duplicates to remove. In the example below I simply keep the first `_id` in `uniqueIds` (note that `$addToSet` does not guarantee any particular order):

    while (cur.hasNext()) {
        var doc = cur.next();
        var index = 1; // start at 1 so the first _id in the set is kept
        while (index < doc.uniqueIds.length) {
            db.numbers.remove({ _id: doc.uniqueIds[index] });
            index = index + 1;
        }
    }
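    If "keep the first one" is not the right rule, the keep-decision can be isolated in a small helper. Below is a hedged plain-JavaScript sketch that keeps the newest document by a hypothetical `updatedAt` field; it assumes each duplicate group carries the full documents (e.g. collected with `$push: "$$ROOT"` in the `$group` stage rather than just the `_id`s) — adapt it to your pipeline.

```javascript
// Sketch: given one group of duplicate documents, return the _ids to delete,
// keeping the document with the greatest `updatedAt` (hypothetical field).
function idsToRemove(docs) {
  var keep = docs.reduce(function (best, doc) {
    return doc.updatedAt > best.updatedAt ? doc : best;
  });
  return docs
    .filter(function (doc) { return doc !== keep; })
    .map(function (doc) { return doc._id; });
}

var group = [
  { _id: "a", value: 7, updatedAt: 1 },
  { _id: "b", value: 7, updatedAt: 3 },
  { _id: "c", value: 7, updatedAt: 2 }
];
// idsToRemove(group) returns the _ids of the older documents;
// pass each one to db.numbers.remove({ _id: id }) in the shell.
```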
    

    After removal of the duplicates you can add a unique index:

    db.numbers.createIndex({ "value": 1 }, { unique: true })
    
  • 2020-12-13 10:57

    pip install mongo_remove_duplicate_indexes

    The best way is to write a script in Python or any language you prefer: create a new collection with a unique index using db.collectionname.createIndex({'indexname': 1}, {unique: true}), then iterate over the old collection and insert its documents into the new one. Documents that would duplicate the unique key will not be inserted into the new collection, and you can easily handle the resulting duplicate key exception with your language's exception handling.

    Check out the package source code for an example.
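    The insert-and-catch pattern this answer describes looks roughly like the following plain-JavaScript sketch. `UniqueStore` here is an in-memory stand-in for the unique-indexed collection (an assumption for illustration); with a real MongoDB driver you would instead catch the server's duplicate key error, which carries error code 11000.

```javascript
// Sketch: insert into a unique-keyed store, swallowing duplicate key errors.
// UniqueStore stands in for the new collection with its unique index.
function UniqueStore(keyField) {
  this.keyField = keyField;
  this.byKey = new Map();
}
UniqueStore.prototype.insert = function (doc) {
  var key = doc[this.keyField];
  if (this.byKey.has(key)) {
    // 11000 is the code a real MongoDB driver reports for a duplicate key
    var err = new Error("E11000 duplicate key error");
    err.code = 11000;
    throw err;
  }
  this.byKey.set(key, doc);
};

var store = new UniqueStore("value");
var skipped = 0;
[{ value: 1 }, { value: 1 }, { value: 2 }].forEach(function (doc) {
  try {
    store.insert(doc);
  } catch (e) {
    if (e.code !== 11000) throw e; // only swallow duplicate key errors
    skipped++;
  }
});
// store now holds 2 documents; 1 duplicate was skipped
```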
