Is overwriting a file multiple times enough to erase its data?


Question


In Shredding files in .NET, it is recommended to use Eraser or the code in this CodeProject article to securely erase a file in .NET.

I was trying to make my own method of doing so, as the code from CodeProject had some problems for me. Here's what I came up with:

    // Requires: using System; using System.IO;
    private static readonly Random rnd = new Random();

    public static void secureDelete(string file, bool deleteFile = true)
    {
        // Rename the file so the original name no longer appears on disk.
        string nfName = "deleted" + rnd.Next(1000000000, int.MaxValue) + ".del";
        string newPath = Path.Combine(Path.GetDirectoryName(file), nfName);
        File.Move(file, newPath);

        // Overwrite the renamed file with 1 MB of random data, 7 times.
        byte[] data = new byte[1024 * 1024];
        for (int pass = 0; pass < 7; pass++)
        {
            rnd.NextBytes(data);
            File.WriteAllBytes(newPath, data);
        }

        if (deleteFile) { File.Delete(newPath); }
    }

It seems to work fine. It renames the file randomly and then overwrites it with 1 MB of random data 7 times. However, I was wondering how safe it actually is, and if there is any way I could make it safer?
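
For reference, I call it like this (the paths here are just examples):

    secureDelete(@"C:\temp\secret.txt");        // overwrite, then delete
    secureDelete(@"C:\temp\notes.txt", false);  // overwrite, keep the .del file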


Answer 1:


A file system, especially when accessed through a higher-level API such as the ones found in System.IO, is so many levels of abstraction above the actual storage implementation that this approach makes little sense for modern drives.

To be clear: the CodeProject article, which promotes overwriting a file by name multiple times, is absolute nonsense - for SSDs at least. There is no guarantee whatsoever that writing to a file at some path multiple times writes to the same physical location on disk every time.

Of course, opening a file with read-write access and overwriting it from the beginning conceptually writes to the same "location". But that location is pretty abstract.
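
To illustrate, here is a minimal sketch of that kind of in-place overwrite (the path is hypothetical). Even this only pins down the logical offset within the file, not where the bytes physically end up:

    // Overwrite a file "in place": open it and write over the existing bytes
    // starting at logical offset 0. The drive is still free to remap this
    // write to a different physical location on the medium.
    using (var fs = new FileStream(@"C:\temp\secret.txt",
                                   FileMode.Open, FileAccess.Write))
    {
        var junk = new byte[fs.Length];
        new Random().NextBytes(junk);
        fs.Write(junk, 0, junk.Length);
        fs.Flush(true);   // flushToDisk: ask the OS to push it to the device
    }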

Look at it this way: hard disks, and especially solid-state drives, might take a write such as "set byte N of cluster M to O" and actually write an entire new cluster to an entirely different location on the drive, to prolong the drive's lifetime (repeated writes to the same memory cells wear them out).

From Coding for SSDs – Part 3: Pages, Blocks, and the Flash Translation Layer | Code Capsule:

Pages cannot be overwritten

A NAND-flash page can be written to only if it is in the “free” state. When data is changed, the content of the page is copied into an internal register, the data is updated, and the new version is stored in a “free” page, an operation called “read-modify-write”. The data is not updated in-place, as the “free” page is a different page than the page that originally contained the data. Once the data is persisted to the drive, the original page is marked as being “stale”, and will remain as such until it is erased.
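
To make this read-modify-write behavior concrete, here is a toy simulation of a flash translation layer (an illustration under simplified assumptions, not any real drive's firmware): after a logical overwrite, the old physical page still holds the original bytes until an erase cycle reclaims it.

    // Toy flash translation layer (illustration only, not real firmware).
    // Requires: using System; using System.Collections.Generic;
    var pages = new string[8];                // physical pages; null = free
    var map = new Dictionary<int, int>();     // logical page -> physical page
    int nextFree = 0;

    void Write(int logicalPage, string data)
    {
        pages[nextFree] = data;         // new version goes to a *free* page
        map[logicalPage] = nextFree++;  // remap the logical page to it
        // The previously mapped physical page is merely "stale": its old
        // contents stay on the medium until an erase cycle wipes them.
    }

    Write(0, "my secret");    // lands in physical page 0
    Write(0, "random junk");  // "overwrite" lands in physical page 1
    Console.WriteLine(pages[map[0]]);  // random junk (what the OS sees)
    Console.WriteLine(pages[0]);       // my secret   (still physically there)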

This means that somewhere on the drive, the original data is still readable: in the page that used to hold the contents of cluster M before the write was requested. It stays there until it is actually erased. That page is now marked as "stale", but you'll need very low-level access to the disk to reach it in order to overwrite it, and I'm not sure that's possible with SSDs.
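
For the curious: on Windows, the closest user-mode equivalent of such low-level access is opening the volume as a raw device. A hypothetical sketch (administrator rights required; note it still only reaches logical sectors, not the drive's remapped physical pages):

    // "Low-level access" on Windows: open the volume as a raw device.
    // Sketch only; writing at this level can destroy the file system.
    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class RawVolume
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(string name, uint access,
            uint share, IntPtr security, uint creation, uint flags, IntPtr template);

        public static byte[] ReadFirstSector()
        {
            // \\.\C: = raw C: volume; 0x80000000 = GENERIC_READ,
            // 1 | 2 = FILE_SHARE_READ | FILE_SHARE_WRITE, 3 = OPEN_EXISTING
            SafeFileHandle handle = CreateFile(@"\\.\C:", 0x80000000, 1 | 2,
                                               IntPtr.Zero, 3, 0, IntPtr.Zero);
            using (var raw = new FileStream(handle, FileAccess.Read))
            {
                var sector = new byte[512];   // reads must be sector-aligned
                raw.Read(sector, 0, sector.Length);
                return sector;
            }
        }
    }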

Even if you were to overwrite the entire SSD or hard drive multiple times, chances are that some of your very private data would survive in a now-defunct sector or page, because at the moment you overwrote or cleared it, the drive had already determined that location to be defective and remapped it. A forensics team may be able to read this data (albeit damaged). So, if you have data on a drive that can be used against you: toss the drive into a fire.

See also Get file offset on disk/cluster number for some more (links to) information about lower-level file system APIs.

But take all of this with quite a grain of salt: it is largely hearsay, and I have no hands-on experience with this level of disk access.



Source: https://stackoverflow.com/questions/38935535/is-overwriting-a-file-multiple-times-enough-to-erase-its-data
