Before getting rid of a server, I needed a quick and efficient way to wipe some of the data we had on the disk. Fortunately, this functionality is built into Linux, so a quick Google search turns up the necessary command:

shred -v -n 1 -z -u /path/to/your/file

It is possible to increase the number of passes (i.e. the number of times data is written over your original data) simply by specifying it after the -n option.
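As a quick sketch of that option, the snippet below creates a throwaway file (the /tmp path is a hypothetical example, not from the original post) and shreds it with three random passes before the final zeroing pass and removal:

```shell
# Create a disposable demo file (hypothetical path for illustration)
printf 'secret data' > /tmp/shred-demo.txt

# -n 3: three passes of random data, -z: final pass of zeroes,
# -u: remove the file afterwards, -v: show progress
shred -v -n 3 -z -u /tmp/shred-demo.txt
```

More passes take proportionally longer; for most modern drives a single random pass is generally considered sufficient.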

However, as mentioned in the original thread, this will only be effective for files over a certain size. If you need to shred small files, a better approach is the following:

shred -v -n 1 /path/to/your/file #overwriting with random data
sync #forcing a sync of the buffers to the disk
shred -v -n 0 -z -u /path/to/your/file #overwriting with zeroes and removing the file

The problem with these commands is that they only target individual files, which is not very convenient when you want to shred a full directory and its underlying content. Fortunately again, it is easy to find all files within a folder and pass them to the command above. To do that, simply find all files within a given directory (find works recursively by default) and pipe them to the shred command:

find folder_name -type f | xargs shred -v -n 1 -z -u

or, to process mostly small files:

find folder_name -type f | xargs shred -v -n 1
sync
find folder_name -type f | xargs shred -v -n 0 -z -u
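One caveat worth noting: a plain find | xargs pipeline splits on whitespace, so filenames containing spaces would break it. A safer variant (a sketch using a hypothetical /tmp directory, since the original post does not cover this case) is to pass NUL-delimited names with -print0 / -0; and since shred only removes files, the leftover empty directory tree still needs a separate rm:

```shell
# Hypothetical demo directory containing an awkward filename
mkdir -p /tmp/shred-demo-dir
printf 'secret' > '/tmp/shred-demo-dir/file with spaces.txt'

# -print0 and -0 delimit names with NUL bytes, so spaces are handled safely
find /tmp/shred-demo-dir -type f -print0 | xargs -0 shred -v -n 1 -z -u

# shred removes only the files; clean up the now-empty directory tree
rm -r /tmp/shred-demo-dir
```

The same -print0 / -0 pair can be dropped into either pipeline above.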