This blog post picks up from Part 1.
Previously we saw that the PNG encoder in .NET produced, on average, a 1.3x increase in the size of a PNG file compared to the original Hypersnap PNG version. It's not entirely clear, though, where Hypersnap itself stands. To get a better sense of that, we'll investigate using the PNGOUT tool and assume that PNGOUT represents the "best case".
I ran PNGOUT on the same collection of 11206 input files using this script:
PS> Get-ChildItem . -Include *.png -Recurse | ForEach-Object { pngout $_.FullName }
First realization: at the maximum optimization level (which is the PNGOUT default), each PNG file takes a very long time to analyze and encode, about 1 MINUTE per PNG. So instead of converting all 11206 files, I let the script run overnight and only compared the files that were converted, which accounts for 1450 files. The sample set is smaller, but I think it contains a reasonable variety of screen captures.
Quick summary: ALL the files were smaller after PNGOUT was finished. On average, the PNGOUT-encoded files were 85.7% of the size of the Hypersnap-encoded files. The bar chart below shows the number of files that fell within each compression range. As you can see, most of the compression results cluster in the 85% - 90% band.
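For the curious, here is a minimal sketch of how those bands could be tallied from per-file ratios. The CountByBand helper, and the idea of feeding it precomputed PNGOUT/Hypersnap size ratios, are my own assumptions for illustration, not part of the original scripts:

// Hypothetical helper: bucket per-file compression ratios (PNGOUT size / Hypersnap size)
// into 5% bands to get the counts behind a bar chart like the one above.
public static System.Collections.Generic.SortedDictionary<int, int> CountByBand(
    System.Collections.Generic.IEnumerable<double> ratios)
{
    var bands = new System.Collections.Generic.SortedDictionary<int, int>();
    foreach (double ratio in ratios)
    {
        int percent = (int)System.Math.Round(ratio * 100.0);
        int band = (percent / 5) * 5; // e.g. 87% lands in the 85% - 90% band
        int count;
        bands.TryGetValue(band, out count);
        bands[band] = count + 1;
    }
    return bands;
}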

When taken together with the data from the previous blog post, we see the following: relative to the Hypersnap-encoded originals, the .NET encoder makes files larger (about 1.3x on average) while PNGOUT makes them smaller (about 0.86x on average). Assuming the two samples are comparable, that puts the .NET-encoded files at roughly 1.5x the size of the PNGOUT-encoded ones.
The code below shows how to find how big a .NET Framework-encoded PNG will be, given a starting image. Instead of writing to disk, a memory stream is used and re-used. This keeps the memory usage small and avoids the I/O overhead of writing to disk.
public static long GetDotNetPNGSize(string filename, System.IO.MemoryStream memstream)
{
    // Resets the memory stream, encodes the image as PNG,
    // and returns the number of bytes in the stream.
    using (var bmp0 = new System.Drawing.Bitmap(filename))
    {
        memstream.SetLength(0);
        bmp0.Save(memstream, System.Drawing.Imaging.ImageFormat.Png);
        long memfilesize = memstream.Length;
        return memfilesize;
    }
}
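As a usage sketch, the loop below runs that method over a folder of captures and prints each file's .NET-encoded size as a percentage of its on-disk (Hypersnap-encoded) size. The C:\captures path is just a placeholder, not a path from the original experiment:

// Re-use a single MemoryStream across all files to keep allocations down.
var memstream = new System.IO.MemoryStream();
foreach (string filename in System.IO.Directory.GetFiles(
    @"C:\captures", "*.png", System.IO.SearchOption.AllDirectories))
{
    long dotnetSize = GetDotNetPNGSize(filename, memstream);
    long originalSize = new System.IO.FileInfo(filename).Length;
    System.Console.WriteLine("{0}: {1:P1} of the original size",
        filename, (double)dotnetSize / originalSize);
}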