Deleting duplicate lines in a file using Java

予麋鹿 2020-12-14 01:39

As part of a project I'm working on, I'd like to clean up a file I generate of duplicate line entries. These duplicates often won't occur near each other, however. I came …

14 Answers
  • 2020-12-14 01:59

    The hash set approach is OK, but you can tweak it so that it does not have to store all the Strings in memory: store a logical pointer to the line's location in the file instead, so you can go back and read the actual value only when you need it.

    Another creative approach is to append its line number to each line, sort all the lines, remove the duplicates (ignoring the last token, which should be the line number), then sort the file again by that last token and strip it out in the output; a rough sketch of this follows below.
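
    A minimal in-memory sketch of that second idea (mine, not the answer author's code): instead of appending the line number as a text token, it pairs each line with its index, sorts by content so duplicates become adjacent, keeps the first occurrence, and re-sorts by index. For a file too large for memory you would replace the two sorts with external sorts; the file names are placeholders.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.AbstractMap;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class SortDedup {
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get("input.txt")); // placeholder path

            // Tag each line with its original position, then sort by content
            // so duplicates end up next to each other.
            List<Map.Entry<String, Integer>> tagged = new ArrayList<>();
            for (int i = 0; i < lines.size(); i++) {
                tagged.add(new AbstractMap.SimpleEntry<>(lines.get(i), i));
            }
            tagged.sort(Map.Entry.comparingByKey());

            // Keep only the first occurrence of each distinct line
            // (the sort is stable, so the first occurrence comes first).
            List<Map.Entry<String, Integer>> unique = new ArrayList<>();
            String previous = null;
            for (Map.Entry<String, Integer> e : tagged) {
                if (!e.getKey().equals(previous)) {
                    unique.add(e);
                    previous = e.getKey();
                }
            }

            // Sort back by original position to restore file order, then drop the tag.
            unique.sort(Map.Entry.comparingByValue());
            List<String> output = new ArrayList<>();
            for (Map.Entry<String, Integer> e : unique) {
                output.add(e.getKey());
            }
            Files.write(Paths.get("output.txt"), output); // placeholder path
        }
    }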

  • 2020-12-14 02:02

    You could use a Set from the Collections framework to store the unique values you have seen as you read the file.

    Set<String> uniqueStrings = new HashSet<String>();

    // read your file line by line, putting each line into variable 'thisLine'
    // (the file name here is just a placeholder)
    try (BufferedReader reader = new BufferedReader(new FileReader("yourFile.txt"))) {
        String thisLine;
        while ((thisLine = reader.readLine()) != null) {
            uniqueStrings.add(thisLine);
        }
    }

    for (String uniqueString : uniqueStrings) {
        // do your processing for each unique String
        // e.g. System.out.println(uniqueString);
    }
    
  • 2020-12-14 02:02

    I have made two assumptions for this efficient solution:

    1. There is a blob equivalent of a line, or we can process each line as binary.
    2. We can save the offset of (or a pointer to) the start of each line.

    Based on these assumptions, the solution is: read each line and use its length as the key in a hashmap, so the hashmap stays light; the value for each key is the list of offsets of all lines having that length. Building this hashmap is O(n). While mapping the offsets for each line, compare the line's blob with the blobs of the existing entries in the offset list for that length, skipping entries already marked with offset -1. If a duplicate is found, remove both lines and save -1 as the offset in those places in the list. (A rough sketch follows at the end of this answer.)

    So consider the complexity and memory usage:

    Hashmap memory / space complexity: O(n), where n is the number of lines.

    Time complexity: the worst case is when there are no duplicates but every line has the same length (say m characters and n lines); then each new line has to be compared against all earlier lines of that length, which is O(n²) comparisons. Since we assume blobs can be compared directly, m itself does not dominate.

    In other cases we save on comparisons, at the cost of a little extra space in the hashmap.

    Additionally, we could use MapReduce on the server side to split the work and merge the results later, using the line length (or the start of the line) as the mapper key.
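
    Here is a rough sketch of a variant of this idea (mine, not the answer author's code): it keys a hashmap by line length, stores only the file offsets of lines kept so far, and seeks back with RandomAccessFile to compare actual contents. It keeps the first occurrence rather than removing both copies, and streams unique lines to an output file. File names are placeholders, and RandomAccessFile.readLine() assumes single-byte-per-character text.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class OffsetDedup {
        public static void main(String[] args) throws IOException {
            // line length -> offsets of the lines already kept with that length
            Map<Integer, List<Long>> keptByLength = new HashMap<>();

            try (RandomAccessFile in = new RandomAccessFile("input.txt", "r");        // placeholder
                 BufferedWriter out = new BufferedWriter(new FileWriter("output.txt"))) {
                long lineStart = in.getFilePointer();
                String line;
                while ((line = in.readLine()) != null) {
                    long nextLineStart = in.getFilePointer();
                    List<Long> candidates =
                            keptByLength.computeIfAbsent(line.length(), k -> new ArrayList<>());

                    // Compare only against earlier lines of the same length,
                    // re-reading each one from its saved offset.
                    boolean duplicate = false;
                    for (long offset : candidates) {
                        in.seek(offset);
                        if (line.equals(in.readLine())) {
                            duplicate = true;
                            break;
                        }
                    }
                    in.seek(nextLineStart); // resume reading where we left off

                    if (!duplicate) {
                        candidates.add(lineStart);
                        out.write(line);
                        out.newLine();
                    }
                    lineStart = nextLineStart;
                }
            }
        }
    }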

  • 2020-12-14 02:09

    Okay, most answers are a bit silly and slow, since they involve adding lines to some hashset and then moving them back out of that set again. Let me show the most efficient solution in pseudocode:

    Create a hashset for just strings.
    Open the input file.
    Open the output file.
    while not EOF(input)
      Read Line.
      If not(Line in hashSet)
        Add Line to hashset.
        Write Line to output.
      End If.
    End While.
    Free hashset.
    Close input.
    Close output.
    

    Please guys, don't make it more difficult than it needs to be. :-) Don't even bother about sorting, you don't need to.
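
    For reference, a minimal Java rendering of that pseudocode (my sketch, with placeholder file names): it writes to a separate output file while reading, so only the set of distinct lines is held in memory and the original order is preserved.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    public class StreamDedup {
        public static void main(String[] args) throws IOException {
            Set<String> seen = new HashSet<>();
            try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));     // placeholder
                 BufferedWriter out = new BufferedWriter(new FileWriter("output.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Set.add returns true only the first time a line is seen,
                    // so later duplicates are skipped.
                    if (seen.add(line)) {
                        out.write(line);
                        out.newLine();
                    }
                }
            }
        }
    }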

  • 2020-12-14 02:13

    Hmm... 40 megs seems small enough that you could build a Set of the lines and then print them all back out. This would be way, way faster than doing O(n²) I/O work.

    It would be something like this (ignoring exceptions):

    public void stripDuplicatesFromFile(String filename) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(filename));
        Set<String> lines = new HashSet<String>(10000); // maybe should be bigger
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        reader.close();
        BufferedWriter writer = new BufferedWriter(new FileWriter(filename));
        for (String unique : lines) {
            writer.write(unique);
            writer.newLine();
        }
        writer.close();
    }
    

    If the order is important, you could use a LinkedHashSet instead of a HashSet. Since the elements are stored by reference, the overhead of an extra linked list should be insignificant compared to the actual amount of data.

    Edit: As Workshop Alex pointed out, if you don't mind making a temporary file, you can simply print out the lines as you read them. This allows you to use a simple HashSet instead of LinkedHashSet. But I doubt you'd notice the difference on an I/O bound operation like this one.

  • 2020-12-14 02:13

    If the order does not matter, the simplest way is shell scripting:

    <infile sort | uniq > outfile
    