How to get the maximum compression? #64
Replies: 5 comments
-
The sample file in this case is 1,434,736,640 bytes. If the directory is already tar'ed, then the tarball can be compressed directly. There is a very slight difference in compression size between the two runs; standard multi-threaded lzma gives a much lower time with only a very slight degradation in compression.
The -p 1 option will create the largest possible block to compress instead of splitting a chunk into multiple blocks. Depending on your system memory, you may also try overriding the ZPAQ block size. Be warned, though: it takes a long time. Benchmarking is an ongoing problem, because what's good for one case is not good for all. There are so many ways to slice up how "good" a compression method is. You must find your own "best" way.
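The original commands did not survive extraction, so here is a hedged sketch of what a maximum-compression run might look like. The `-z` (ZPAQ), `-L` (level), and `-p` (processor count) flags are taken from classic lrzip; lrzip-next may name them differently, so verify against `lrzip-next -h` before use.

```shell
# Sketch only -- flag names assumed from classic lrzip; check lrzip-next -h.

# lrzip-next compresses a single file or stream, so tar the directory first:
tar -cf sample.tar sample-dir/

# ZPAQ backend at maximum level, one thread so a single large block is built:
lrzip-next -z -L9 -p1 sample.tar

# For comparison, the standard multi-threaded lzma backend at the same level:
lrzip-next -L9 sample.tar
```

Comparing the resulting archive sizes and wall-clock times is how the trade-off described above (slightly better ratio vs. much longer runtime) would be measured.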
-
Is there a link to the Linux tarball used for this test?
-
kernel.org
-
If I may know, how does the unlimited flag affect the compression?
-
I've never used it or tested it. IMHO it's totally unnecessary; there's some documentation on it in the original README. With today's modern systems and large reservoirs of RAM, it's not useful. With files larger than available RAM, -U may offer some benefit, but at a huge cost in time.
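For concreteness, a minimal sketch of the unlimited-window flag discussed above. `-U` is the unlimited-window option documented in the original lrzip README; whether lrzip-next keeps the same letter is an assumption, so check `lrzip-next -h` first.

```shell
# Sketch only -- -U per the original lrzip README; verify against lrzip-next -h.
# Unlimited window lets the compression window exceed available RAM,
# which may help on files larger than RAM, at a large cost in time:
lrzip-next -U -L9 hugefile.tar
```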
--
Peter Hyman
Sent from mobile. Please excuse brevity
-
How would one get the maximum directory compression, regardless of time taken?
Regards