This blog post picks up from Part 1.
BACKGROUND
Previously we saw that the PNG encoder in .NET produced files that were, on average, about 1.3x the size of the original Hypersnap PNG versions. It’s not entirely clear, though, where Hypersnap itself stands. To get a better sense of that, we’ll investigate using the PNGOUT tool – and assume that PNGOUT represents the “best case”.
I ran PNGOUT on the same collection of 11206 input files using this script:
PS> Get-ChildItem . -Include *.png -Recurse | ForEach-Object { pngout "$($_.FullName)" }
First realization: at the maximum optimization level (which is the PNGOUT default), each PNG file takes a very long time to analyze and encode – about one MINUTE per PNG. So instead of converting all 11206 files, I let the script run overnight and compared only the files that had been converted by morning: 1450 files. The sample set is smaller, but I think it still contains a reasonable variety of screen captures.
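For reference, here is a minimal sketch (not the exact script I used) of how the PNGOUT output can be compared against the originals. It assumes the Hypersnap-encoded originals were copied to a separate folder before PNGOUT rewrote the files in place; the folder paths are hypothetical.

using System;
using System.IO;
using System.Linq;

class CompareSizes
{
    static void Main()
    {
        // Hypothetical folder layout: untouched Hypersnap copies vs. PNGOUT output
        string originalRoot = @"C:\captures\originals";
        string pngoutRoot = @"C:\captures\pngout";

        var ratios = Directory
            .EnumerateFiles(pngoutRoot, "*.png", SearchOption.AllDirectories)
            .Select(processed =>
            {
                // Map each processed file back to its original via the relative path
                string relative = processed.Substring(pngoutRoot.Length).TrimStart('\\');
                string original = Path.Combine(originalRoot, relative);
                return (double)new FileInfo(processed).Length / new FileInfo(original).Length;
            })
            .ToList();

        Console.WriteLine("Files compared: {0}", ratios.Count);
        Console.WriteLine("Average PNGOUT size vs. Hypersnap size: {0:P1}", ratios.Average());
    }
}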
HYPERSNAP ENCODER VS. PNGOUT ENCODER
Quick summary: ALL the files were smaller after PNGOUT was finished. On average, the PNGOUT-encoded files were 85.7% of the size of the Hypersnap-encoded files. The bar chart below shows the number of files that fell within each compression band. As you can see, most of the compression results cluster in the 85% - 90% band.

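To give a sense of how the chart above was produced, here is a rough sketch of bucketing the per-file ratios (PNGOUT size divided by Hypersnap size) into 5% bands and counting the files in each. The 5% band width is my assumption about how the chart was grouped; the ratios would come from the comparison step shown earlier.

using System;
using System.Collections.Generic;
using System.Linq;

static class RatioHistogram
{
    // Groups ratios (e.g. 0.857) into 5% bands (80%-85%, 85%-90%, ...) and prints the counts
    public static void PrintBands(IEnumerable<double> ratios)
    {
        var bands = ratios
            .GroupBy(r => (int)(Math.Floor(r * 100 / 5) * 5))
            .OrderBy(g => g.Key);

        foreach (var band in bands)
        {
            Console.WriteLine("{0}% - {1}%: {2} files", band.Key, band.Key + 5, band.Count());
        }
    }
}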
OBSERVATIONS
When taken together with the data from the previous blog post, we see the following:
- PNGOUT makes a decent difference compared to Hypersnap – its output files are about 14% smaller.
- Applying PNGOUT’s extreme compression on demand doesn’t seem practical: with an encoding time of roughly one minute per file, it’s not a reasonable thing to do on every save.
- .NET Framework PNG files are about 1.25x bigger than the Hypersnap-encoded versions.
- .NET Framework PNG files are about 1.47x bigger than the PNGOUT-encoded versions.
- Because PNGOUT is so slow, Hypersnap PNG encoding looks like a very reasonable midway point. Whatever Hypersnap is doing is fast (there’s no evident slowdown when saving PNGs) and provides a significant improvement over .NET.
- Keep in mind that if you are transmitting a lot of PNG files over a network, the encoder you choose affects how many bytes you send. If you are paying for bandwidth, this might even be significant.
RAW DATA
USEFUL SOURCE CODE
The code below shows how to find how big a .NET Framework-encoded PNG will be, given a starting image. Instead of writing to disk, a memory stream is used and re-used. This keeps the memory usage small and avoids the I/O overhead of writing to disk.
public static long GetDotNetPNGSize(string filename, System.IO.MemoryStream memstream)
{
    // Resets the memory stream, encodes the image as PNG,
    // and returns the number of bytes in the stream
    using (var bmp0 = new System.Drawing.Bitmap(filename))
    {
        memstream.SetLength(0);
        bmp0.Save(memstream, System.Drawing.Imaging.ImageFormat.Png);
        long memfilesize = memstream.Length;
        return memfilesize;
    }
}
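As a hypothetical usage example, the method can be called in a loop over a whole folder of captures (the folder path here is made up), re-using the same MemoryStream for every file:

var memstream = new System.IO.MemoryStream();
long total = 0;
foreach (var file in System.IO.Directory.EnumerateFiles(@"C:\captures", "*.png", System.IO.SearchOption.AllDirectories))
{
    // GetDotNetPNGSize resets the stream's length, so its internal buffer is re-used across files
    total += GetDotNetPNGSize(file, memstream);
}
System.Console.WriteLine("Total .NET-encoded size: {0} bytes", total);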