Shred files - partial shred?
I don't know if there would be anyone else using this, but...
I work with tens of HUGE files which are worthless when not whole. They have to be shredded from time to time (at least partly) and it's really annoying to
a) let it run and go away for half a day
b) sit there and stop shredding after like 20-25% of each file to save some time
Would it be possible to add a Partial shred option where a percentage of the file would have to be specified? Then only that percentage of each file would be shredded (the head section and then random parts of the file?). It could save most of the precious shredding time!
Re: Shred files - partial shred?
Couldn't you just let it work and start another instance of Salamander?
Re: Shred files - partial shred?
Ether wrote: Couldn't you just let it work and start another instance of Salamander?
Yes, I could, of course. That, however, is an awful example of wasting hard drive time. The drives would then shred for a few hours and any work with them would be horribly slow. But I got the message; this can't affect a lot of users (since not a lot of people use Salamander for shredding at all, I believe), but it would be a nice bonus to the Shred Files plugin functionality. The main purpose would still be fulfilled, only in much shorter time.
Re: Shred files - partial shred?
Slanec wrote: an awful example of wasting hard drive time
Now it makes more sense. I thought you were worried about your working time. Anyway, your suggestion seems reasonable to me - either shred only a region of the file, or rewrite only every nth byte.
In the meantime, I'd suggest trying encryption (e.g. EFS, TrueCrypt, BitLocker, PGP). I don't know your reasons for shredding the files, but in the usual case the result is the same - files cannot be read from a hard drive after you're done working with them.
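Just to illustrate the "region of the file / every nth byte" idea, a minimal sketch in Python (the function name and parameters are made up for the example, not how the Shred Files plugin actually works): overwrite the head of the file, then every nth block of the remainder, in one pass of random data.
Code: Select all
import os

def partial_shred(path, head_fraction=0.25, block_size=1024 * 1024, every_nth=8):
    # Single pass: overwrite the first head_fraction of the file, then
    # every every_nth block of the remainder, with random bytes.
    size = os.path.getsize(path)
    head = int(size * head_fraction)
    with open(path, "r+b") as f:
        # 1) overwrite the head section in block_size chunks
        pos = 0
        while pos < head:
            f.seek(pos)
            f.write(os.urandom(min(block_size, head - pos)))
            pos += block_size
        # 2) overwrite every nth block of the remainder
        block_index = 0
        while pos < size:
            if block_index % every_nth == 0:
                f.seek(pos)
                f.write(os.urandom(min(block_size, size - pos)))
            pos += block_size
            block_index += 1
        f.flush()
        os.fsync(f.fileno())  # push the overwrites down to the device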
Re: Shred files - partial shred?
What size are these HUGE files of yours?
(Kind of a rhetorical response, but if the purpose of shredding a file is that its contents cannot be determined, and a "partial" shred leaves some contents, then what is the purpose of a "partial shred"?)
Thinking that there are small utility programs that can be used to create (pre-create) large files - quickly, even zero filled or the like. Might be an option, particularly if you keep your particular data files on their own partition. Simply delete the files, then in the now-freed space quickly create a large (huge) zero-filled file in its place.
Or pre-create a huge zero-filled (or other common <within the file> character) file. Zip it up. Zipped, its size would reduce to just about nothing. Delete your data sets, unzip the Zip, which again fills the available space. Delete that file? Once again you're back with free space, but now the slack is all zeros.
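For what it's worth, a minimal sketch of that "fill the freed space with zeros" idea in Python (the file name ZEROFILL.$$$ and the chunk size are just placeholders): keep writing zero-filled chunks until the volume reports disk full, sync, then delete the filler file.
Code: Select all
import os

def zero_fill_free_space(target_dir, chunk_mb=64):
    # Write zeros until the volume is full, so previously freed
    # clusters get overwritten, then remove the filler file again.
    path = os.path.join(target_dir, "ZEROFILL.$$$")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    try:
        with open(path, "wb", buffering=0) as f:
            try:
                while True:
                    f.write(chunk)   # raises OSError once the disk is full
            except OSError:
                pass
            os.fsync(f.fileno())     # make sure the zeros actually hit the disk
    finally:
        os.remove(path)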
I have no knowledge of this particular software, Dummy File Creator, but its functions appear to follow my thoughts.
BIGFILE.ZIP. No instructions, but it asks for the file size in MB & goes on to speedily create a null-filled file of the specified size named 0BIGFILE.$$$. (Only thing is that it places the file in the root of the partition, it seems. I have an older BIGFILE & it places its file in the current directory.)
Re: Shred files - partial shred?
So I had the occasion to want to create a big file today.
Originally I was using SDelete to "clean free space" on a drive prior to returning it (as it wasn't working properly). A time-consuming proposition on a 1 TB HDD. Got me thinking about BIGFILE. So I pull out BIGFILE, & that's just great, creating a null-filled file in an instant. But as much as I wanted, I could only get it to create a 1 GB file at most. Figured I'd try out Dummy File Creator. That would be fine, though it was much too slow (even on the 1 GB file, & hard to imagine what 1 TB would be like).
So a bit more searching points to Sysinternals.
Contig
You can use the Sysinternals Contig tool. It has a -n switch which creates a new file of a given size. It will create a file almost instantaneous.
To make a new file that is defragmented upon creation, use Contig like this:
Usage: contig [-v] [-n filename length]
Code: Select all
C:\BIN> CONTIG.EXE -v -n S:\DUMY.TXT 639000000000
The hardest & longest part was figuring out whether I had the correct number of zeros. (That should be 639 GB above.) After that, I used SDELETE to (then quickly) clean up the leftovers. For my purposes, that is sufficient to "shred" my data.
Also mentioned was using the (Windows) command: FSUTIL.
Code: Select all
C:\BIN> FSUTIL file createnew S:\DUMY.TXT 639000000000
Works just as well, just as quickly.
Re: Shred files - partial shred?
therube wrote: It will create a file almost instantaneous.
Are you aware that this means the data is not actually written onto the disk? (There's no way it could have just rewritten 20 GiB of my free space with zeroes in under a second.) Another clue might have been that these utilities need administrative privileges, and it doesn't work instantaneously on FAT volumes. If you want that, I suggest "quick" formatting - it's easier.
TechNet wrote: Creating large files when performance is an issue. This avoids the time it takes to fill the file with zeroes when the file is created or extended.
I'm sorry, but if you really want to shred the data, you have to wait for it (or use the hammer & magnet combo).
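(Rough arithmetic on that 20 GiB point: at a typical sequential write rate of around 100 MB/s, physically writing 20 GiB of zeroes would take on the order of 200 seconds - nowhere near "under a second".)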
Re: Shred files - partial shred?
> Are you aware that this means the data is not actually written onto the disk?
Yes.
But at the same time, it kind of has me scratching my head.
The files created are null filled. Or at least it sure appears that way?
Haven't gotten my head around this yet.
So I delete DUMY.TXT, then do an "Undelete".
Undelete does come up with files it says it can recover, though no file is valid (of those I looked at, perhaps with the exception of files < cluster size, typically 4096 bytes). The files are not null filled, though - they look more randomized (or I just happen to be hitting binary data?).
Just now (quickly) created an 18 GB DUMY.TXT file.
Then I ran ECHO >> DUMY.TXT. This did take a bit of time, but it did complete.
So now the file size shows as 18,000,000,013 bytes.
The file is null filled except for the last 13 bytes.
Code: Select all
0000 0000 0000: $00$ ....
.....
0004 30E2 3400: ECHO is on.$0D0A$
> these utilities need administrative privileges
Correct.
> and it doesn't work instantaneously on FAT volumes
Didn't check that.
> (or use the hammer & magnet combo)
Heh.
< I've got to look at this further at a later time ... >
Re: Shred files - partial shred?
therube wrote: The files created are null filled. Or at least it sure appears that way?
I think that Ether wanted to point out that the utility creates a sparse file. It means that it is noted in the file system that "here follow 18 GB of zeros" instead of really writing them (and instead of shredding what you would like to shred). When reading back, the reader is made to think the 18 GB of zeros are really read from the disk. This way, you can have a petabyte of zeros on a 10 MB disk without any kind of compression turned on. The use of sparse files is the reason for the lightning speed.
Sparse files are supported by the more modern file systems - the old NetWare filesystem, for example, or later NTFS. Not by FAT, as Ether mentioned.
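A tiny illustration of that bookkeeping in Python (shown the POSIX way, because the allocated size is easy to read there via st_blocks; on NTFS the sparse attribute has to be set explicitly, as discussed below):
Code: Select all
import os

path = "sparse_demo.bin"
with open(path, "wb") as f:
    f.seek(18_000_000_000 - 1)   # "here follow ~18 GB of zeros"
    f.write(b"\0")               # a single real byte at the very end

st = os.stat(path)
print("logical size :", st.st_size)          # ~18 GB
print("space on disk:", st.st_blocks * 512)  # only a few KB actually allocated
os.remove(path)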
Re: Shred files - partial shred?
Wasn't familiar with "sparse files".
"Sparse Files"
http://msdn.microsoft.com/en-us/library ... 85%29.aspx
"Sparse file"
http://en.wikipedia.org/wiki/Sparse_file
"Fsutil: sparse"
http://www.microsoft.com/resources/docu ... parse.mspx
"Fsutil: file"
http://www.microsoft.com/resources/docu ... _file.mspx
I'll have to ponder this.
So this is kind of like how a download manager may "reserve" (pre-create) space for a file. It marks the space as allocated (& it is), though it only actually contains pertinent data once it has actually been downloaded.
fsutil file talks about sparse files, though it wasn't clear that it creates them - though it must... or does it?
If I fsutil file createnew XXX 10000000, or contig -n ABC 10000000, they both create ~10 MB files, zero filled.
Though if I fsutil sparse queryflag ABC, it reports that it is not a sparse file.
Though I can set it as sparse with fsutil sparse setflag ABC. Then queryflag does return, "This file is set as sparse".
I'll have to ponder this - more.
(Oh, & when I mentioned "null" character above, that was wrong, they are zeros.)
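Incidentally, the same queryflag check can be done from Python on Windows (using the file name ABC from the example above):
Code: Select all
import os, stat

# Equivalent of "fsutil sparse queryflag ABC" (Windows, Python 3.5+)
attrs = os.stat("ABC").st_file_attributes
print("sparse" if attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE else "not sparse")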
"Sparse Files"
http://msdn.microsoft.com/en-us/library ... 85%29.aspx
"Sparse file"
http://en.wikipedia.org/wiki/Sparse_file
"Fsutil: sparse"
http://www.microsoft.com/resources/docu ... parse.mspx
"Fsutil: file"
http://www.microsoft.com/resources/docu ... _file.mspx
I'll have to ponder this.
So this is kind of like how a download manager may "reserve" (pre-create) space for a file. It marks the space as allocated (& it is), though it only actually contains pertinent data once it has actually downloaded.
fsutil file, talks about sparse files, thought it wasn't clear that it creates them, though it must, or is it?
If I fsutil file createnew XXX 10000000, or contig -n ABC 10000000, they both create 1 MB files, zero filled.
Though if I fsutil sparse queryflag ABC, it reports that it is not a sparse file.
Though I can set it as sparse with, fsutil sparse setflag ABC. Then queryflag does return, "This file is set as sparse".
I'll have to ponder this - more.
(Oh, & when I mentioned "null" character above, that was wrong, they are zeros.)
Re: Shred files - partial shred?
Jan Patera wrote: I think that Ether wanted to point out that the utility creates a sparse file.
That is one way to create large files full of zeroes...
therube wrote: fsutil file talks about sparse files, though it wasn't clear that it creates them - though it must... or does it?
...however, apparently there's another way - fsutil and contig create an empty file and set its Valid Data Length to the requested size.
The difference (other than the "sparse flag") is that sparse regions of a file aren't reported as used space, whereas the uninitialized region at the end of an "extended" file is. I could also speculate that these "extended" files actually reserve the physical space, unlike sparse files.
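One rough way to test that speculation (a sketch only; it assumes an NTFS volume S: as in the earlier examples, that fsutil may need elevation, and a purely illustrative 10 GB size): compare the volume's free space before and after fsutil file createnew - if the uninitialized region really reserves physical space, free space should drop by the full file size immediately.
Code: Select all
import shutil, subprocess

SIZE = 10_000_000_000  # 10 GB test file, purely illustrative

free_before = shutil.disk_usage("S:\\").free
subprocess.run(["fsutil", "file", "createnew", r"S:\DUMY.TXT", str(SIZE)], check=True)
free_after = shutil.disk_usage("S:\\").free

# if the space is really allocated up front, this prints roughly SIZE
print("space reserved:", free_before - free_after)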