Find unique lines
How can I find the unique lines and remove all duplicates from a file? My input file is:

1
1
2
3
5
5
7
7

I would like the result to be:

2
3

sort file | uniq will not do the job: it shows every value once.


uniq has the option you need:

   -u, --unique
          only print unique lines

$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3

– kasavbere


Use as follows:

sort filea | uniq -u > fileb

Note that uniq only compares adjacent lines, so the input must be sorted first.


uniq -u has been driving me crazy because it did not work (it only removes duplicates that are on adjacent lines). So instead of that, if you have Python (most Linux distros and servers already have it), assuming you have the data file in notUnique.txt:

#Python #Assuming file
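The Python answer above is cut off, so here is a minimal sketch of the counting approach it hints at. The function name unique_lines and the sample data are illustrative, not from the original answer; unlike uniq -u, this does not require the input to be sorted.

```python
from collections import Counter

def unique_lines(lines):
    # Count every line, then keep only those that occur exactly once.
    # Order follows the input order.
    counts = Counter(lines)
    return [line for line in lines if counts[line] == 1]

# Example matching the question's input:
data = ["1", "1", "2", "3", "5", "5", "7", "7"]
print(unique_lines(data))  # -> ['2', '3']
```

To run it against a file such as notUnique.txt, read the stripped lines first, e.g. unique_lines([l.strip() for l in open("notUnique.txt")]).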