For my own twisted interest, I ran a few unix file compression methods on the ["Declaration of Independence" http://earlyamerica.com/earlyamerica/freedom/doi/text.html] (html version, 13,641 bytes).

|{ border=0 }{ align=right bgcolor=gray style=color:white }{ width=110 } __Method__ |{ width=100 } __Bytes__ |
|{ align=right } Uncompressed: | 13,641 |
|{ align=right } gzip: | 5,508 |

To really stress-test the different unix compression programs, I then concatenated 25,000 copies:

|{ border=0 }{ align=right bgcolor=gray style=color:white }{ width=110 } __Method__ |{ width=100 } __Bytes__ |{ width=100 } __Time__ |{ width=100 } __Ratio__ |
|{ align=right } Uncompressed: | 341,025,000 | - | 0.000% |
|{ align=right } compress: | 90,666,281 | 3m39.374s | 73.414% |
|{ align=right } bzip2: | ? | > 1 hour | ? |
|{ align=right } zip: | 2,378,380 | 1m51.983s | 99.303% |
|{ align=right } gzip: | 2,378,313 | 1m43.769s | 99.303% |

----

__How to create a file containing 25,000 copies of the ["Declaration of Independence" http://earlyamerica.com/earlyamerica/freedom/doi/text.html]__

<pre>
curl http://earlyamerica.com/earlyamerica/freedom/doi/text.html > f
cat f f f f f f f f f f > fd              # 10 copies
cat fd fd fd fd fd fd fd fd fd fd > fh    # 100 copies
cat fh fh fh fh fh fh fh fh fh fh > fk    # 1,000 copies
cat fk fk fk fk fk fk fk fk fk fk > f10k  # 10,000 copies
cat f10k f10k fk fk fk fk fk > doi25k     # 20,000 + 5,000 = 25,000 copies
rm f fd fh fk f10k
</pre>

There's probably a nice easy way to do it with perl or some such, but this was easy enough.
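For the record, a perl version might look something like this (an untested sketch; note that the x operator builds the full 341 MB string in memory before printing it):

<pre>
# -0777 puts perl in slurp mode, so all of f lands in $_ in one read;
# then print 25,000 back-to-back copies of it
perl -0777 -ne 'print $_ x 25000' f > doi25k
</pre>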
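And if you want to regenerate the timing table above, something along these lines should do it (a sketch: the exact flags aren't recorded above, and it assumes compress(1) is still installed on your system):

<pre>
# time each compressor on the big file; read the sizes off ls -l
time compress -c doi25k > doi25k.Z
time gzip -c doi25k > doi25k.gz
time zip -q doi25k.zip doi25k
time bzip2 -c doi25k > doi25k.bz2   # expect a long wait; it ran over an hour above
ls -l doi25k doi25k.Z doi25k.gz doi25k.zip doi25k.bz2

# compression ratio as a percentage, e.g. for gzip (prints 99.302700)
echo "scale=6; (1 - 2378313/341025000) * 100" | bc
</pre>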