Currently, we approximate the number of passes that use random data to be 3 + x/10 (pending #7822), where x is the total number of passes. However, GNU shred output differs:
The GNU output is not random. When comparing marginal differences (i.e. each term minus the one before it), there almost appears to be a pattern, but I'm not sure it holds for larger numbers. I have no idea where to go with this, since we can't consult the GNU source code. The man page doesn't describe the algorithm in use here, and I can't find another shred spec.
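For reference, this is roughly what our current approximation looks like (a minimal sketch with a hypothetical function name, not the actual uutils code):

```rust
// Minimal sketch of the "3 + x/10" approximation described above,
// assuming `x` is the total pass count; function name is hypothetical,
// not the real uutils implementation.
fn approx_random_passes(total_passes: usize) -> usize {
    // Roughly one random pass per ten total passes, plus a fixed three.
    3 + total_passes / 10
}

fn main() {
    for n in [3usize, 10, 25, 100] {
        println!("-n {n}: {} random passes", approx_random_passes(n));
    }
}
```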
Looks like the number of random passes in GNU shred is deterministic and repeats:
And it smooths out to juuuuust a bit more than x/10, e.g. 1022 random passes for -n 10000:
I think it's a good idea to keep it simple with our x / 10 approach. (Btw, you implemented it as (x / 10).max(3), not x / 10 + 3, which makes sense to me.)
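To make the difference concrete, here's a quick comparison of the two formulas (hypothetical throwaway code, not taken from the codebase):

```rust
fn main() {
    // Compare (x / 10).max(3) (what was implemented) with x / 10 + 3
    // (what the issue text describes) for a few pass counts.
    for x in [1u32, 10, 30, 100, 10_000] {
        let max_based = (x / 10).max(3);
        let additive = x / 10 + 3;
        println!("x = {x:>5}: (x/10).max(3) = {max_based:>4}, x/10 + 3 = {additive:>4}");
    }
}
```

The two agree only in that they stay close to x/10 for large x; for -n 10000 both give roughly 1000, which is already in the same ballpark as GNU's 1022.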
Because of that, I'm about to open a PR to declare this our custom "extension", which would close this issue.