PhotoDemon 6.0 beta is live

Chroma key (green screen) is one of many new tools in this release.

Download

Remember: if you’re an advanced user, you never have to wait for a beta release. You can always download PhotoDemon’s latest development release from its GitHub page (source code), or from this nightly build permalink (program only).

PhotoDemon is funded by donations from users like you.
Please consider a small donation to fund development and to help me support my family.
Even $1.00 helps. Thank you!

Overview

It’s taken nearly six months, but PhotoDemon 6.0 is finally ready for release. I’ve already talked about some of the great features this release includes, like powerful selection tools, metadata (EXIF) support, Curves, and other new tools, so I’d recommend glancing through the linked article if you’re curious.

Since that article, a number of other features have been added or improved:

  • All tools now support save/load presets, reset to default, randomize, and automatic save/load of last-used settings. These items are all accessible from a new “command bar” at the bottom of each tool dialog.
  • From left-to-right, the command bar includes buttons for: reset, randomize, saved presets, and save current settings as preset. Last-used settings are automatically saved and loaded by the dialog.
  • Three new blur tools: motion, radial, and zoom blur. These tools outperform similar tools in GIMP and Paint.NET.
  • PhotoDemon’s new radial blur tool is 4x faster than Paint.NET’s, and 30x faster than GIMP’s – and at high angles, it produces significantly better output.
  • Much faster Gaussian and Box blur tools (20x improvement!)
  • The updated Gaussian Blur tool now provides quality settings for improved performance. For most photos, the difference between “good” and “best” will be indistinguishable, but “good” will be some 20x faster.
  • A new chroma key (“green screen”) tool with performance comparable to professional tools, including full support for edge blending. Find it in the Image -> Transparency -> Make color transparent menu.
  • Before color removal; image courtesy http://dimula73.blogspot.com/2013/03/new-user-interface-for-krita-color-to.html
    After color removal. Note that the tool creates a 32bpp image, which you can then composite using any photo editing software.
  • A new Language Editor makes contributing new translations fast and easy.
  • The new Language Editor makes it easier than ever to get involved in translation. Please contact me if you can help! (You will receive full credit for your work.)
  • New variable-strength Sharpen tool
  • Previously, PhotoDemon only provided set “Sharpen” and “Sharpen More” functions. The new tool allows for floating-point adjustments, which allow for much more nuanced fixes. (Unsharp Masking is still available too, obviously!)
  • New Oil Painting tool
  • Same photo as the screenshot at the top of this page, but oil-ified.
  • Minor improvements to many tools, including polar coordinate conversion, perspective correction, wave distort, ripple distort, figured glass, tile image, posterize, rotate, custom filters, and histogram.
  • The perspective tool now supports both forward and reverse transforms. Reverse transforms allow you to simply trace a crooked object, and have it automatically straightened by the program.
    The histogram offers new render options, which can be helpful for identifying areas of channel overlap.
  • Any tool with a “color” option now allows you to pick a color directly from the image by clicking the preview.
  • Much better support for high-DPI screens, including tablets.
  • Faster viewport rendering for 32bpp images.

Again, these new features are only a fraction of what 6.0 includes. Please check out the 6.0 preview article for news on all the other new tools and improvements.

Acknowledgments

This 6.0 release represents six months of hard work from a variety of contributors. While I am very grateful to all of PhotoDemon’s talented contributors, a few deserve special mention. Thank you to:

  • Audioglider for contributing three new tools: Channel Mixer, Vibrance, and Exposure. Audioglider also reported a number of issues, and motivated me to implement preset support for every PD tool.
  • Frank Donckers for again providing the German, French, and Dutch translations, and for contributing many pieces of code to the new Language Editor, including the Google Translate interface. Amazing stuff.
  • GioRock for the Italian translation, and for detailed testing of many small translation items. It takes a ton of work to get all of PD’s text translating properly, and GioRock debugged many items for me, which benefits users of every language.
  • Kroc Camen for a new IDE-safe mouse interface class, derived from his own open-source VB project. Kroc also reviews many of PD’s individual commits, where he catches many small items I overlook.
  • Robert Rayment for helping me profile and optimize a number of PD’s more taxing functions, and for many suggestions on tweaks and improvements. Many of the performance improvements available in this new version are a result of Robert’s help. Please check out his own VB image editor if you can.

Known bugs

  • EXIF data is not maintained with certain combinations of preferences (delay loading EXIF + export full data when saving). This is caused by a metadata caching issue, and will be fixed by release. Fixed!
  • ExifTool plugin is slightly out of date. It will be updated to its latest version upon 6.0’s release. Fixed!
  • Metadata menus sometimes become disabled even when metadata is available. This will be fixed by release. Fixed!
  • OK and Cancel buttons are not currently translated. This will be fixed by release. Fixed!
  • Some hotkeys don’t fire unless the main form is first clicked. This is a known problem with VB, and will hopefully be fixed by release. Fixed!
  • Master language file is missing a few minor text entries. This will be fixed by release.

The beta version was released before these small items were fixed, so it still contains these bugs. Developers can download updated source code, with these fixes, from GitHub.

Official release timeline

Barring any major bugs, the official 6.0 release should happen within several weeks. Feature-wise, it will be identical to this beta release. The only changes will be minor bug fixes and performance improvements. Automatic update notifications for existing PhotoDemon installs will also go live at that point.

Blur Filter performance: PhotoDemon vs GIMP vs Paint.NET

(Note before I begin: the PhotoDemon 6.0 beta should be live by the end of this week. Sorry it took so long to prepare!)

See what kind of fun charts we get to discuss? And here I thought the days of 17-minute photo editing actions died with the Pentium III…

The latest nightly build of PhotoDemon (download it here) includes a bunch of new and improved blur filters. Blur filters are among the most computationally demanding filters in a photo editor, because for each pixel in an image, a bunch of other pixels must also be examined in order to calculate the blur. (Blurs generally work by averaging together groups of pixels. Motion blur averages pixels in a line, radial blur averages pixels in an arc, and normal blur averages pixels in a box or circle shape.)

As a simple example, consider a basic blur with a 200 pixel radius, applied to a 10 megapixel digital photo. For each pixel in the photo (all ten million of them), an area of 200 pixels in each direction must be averaged together. Using a simple box blur, that means a box of 200 pixels left, right, up and down must be tallied (for a net area of 400 * 400, or 160,000 pixel comparisons) in order to calculate the blur. Thus, such an algorithm would require:

10,000,000 pixels * 160,000 calculations per pixel = 1.6 trillion total calculations

Even on a modern processor, that’s an enormous undertaking. Fortunately, mathematicians and coders have developed many clever ways to optimize blur functions. Many of these optimizations appear in the newest PhotoDemon build, so I thought it would be fun to speed-test four of PhotoDemon’s blur tools against two other free photo editors: GIMP and Paint.NET. The results were surprising enough that I thought them worth sharing.

A brief overview of each photo editor:

  • PhotoDemon: open-source, written in VB6, nightly build 893 (6.0 beta)
  • GIMP: open-source, written primarily in C, v2.8.6
  • Paint.NET: closed-source, written primarily in C# (and the .Net framework, per the name), v3.5.11

As benchmarking goes, this was very informal. PhotoDemon reports timing automatically in nightly builds, but for GIMP and Paint.NET I had to resort to using a stopwatch. Normally this is a terrible idea, but the algorithms involved take a very long time to run, so a stopwatch was sufficient for broad timing. (10ths of a second don’t matter much when an algorithm takes twenty minutes to finish…)

All tests were done on Windows 7 (64-bit), on my Core i5 650 (3.2ghz) desktop PC with 8gb of RAM. My PC was middle-of-the-road when I bought it back in 2010, so I’d consider it reasonably representative of an “average” PC. All the tools in question appear to be heavily CPU-bound anyway, so it’s doubtful newer processors or more cores would make a meaningful difference.

The test photo I used was a 10 megapixel photo, 3872×2592 specifically, in JPEG format:

10 megapixel test photo

With the exception of some very long timings (10+ minutes), all timings were checked twice to make sure results were representative. Very long ones were only checked once due to the wait involved, though I did initiate a second attempt just to make sure my PC wasn’t acting up. (It wasn’t.)

Here are the timing results for four separate blur types, with some notes on my implementation, and what I know or can potentially infer about GIMP and/or Paint.NET’s implementations.

(Due to the large size of the images involved, I saw no reason to upload the output images of each test. Anyone interested can easily reproduce this test on their own PC with images of their choosing.)

Gaussian Blur

Two notes – PhotoDemon used the “good” quality setting, which is a Gaussian estimation using a modified 3x box blur, and GIMP used the IIR method.

Gaussian Blur provides an excellent starting point. Gaussian blur works by averaging a square chunk of pixels, and giving pixels close to the center more weight than pixels far away. It is the most common type of blur tool in photo editing software, probably because its results are aesthetically pleasing, and it is an easy blur function to optimize.

Instead of a naive approach, which would involve the 1.6 trillion calculations mentioned above, most photo editors implement Gaussian Blur using a separable implementation, which cuts the work down to a much more pleasant 8 billion calculations. Unfortunately, 8 billion calculations is still a lot. (PhotoDemon’s “best quality” option on its Gaussian tool applies a pure Gaussian using separable kernels. On large images, it’s slow. Very slow.)

An even faster approach takes advantage of a neat mathematical relationship between box filters and Gaussian filters: if you keep applying a box filter to a set of data, the result will eventually approach a Gaussian distribution. (Excellent charts available here, courtesy of Nghia Ho.) The Central Limit Theorem shows that repeating a box blur three times results in a function that’s ~97% identical to a true Gaussian.

PhotoDemon uses this as the basis for its three quality settings for Gaussian blur (good, better, and best). Good is a 3x box blur approximation, Better is a 5x, and Best is a true Gaussian. For the chart above, I used the “good” setting because it is by far the fastest. (Note that there’s a bit more to it than just repeating a box blur – how you calculate the box blur size matters; I use a variation of the W3 recommendation available here.)
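For the curious, here is a minimal sketch of the repeated-box-blur trick in Python. (PhotoDemon itself is written in VB6, so treat this as an illustration of the technique, not the program’s actual code.) The box sizes follow the W3 formula linked above, and the box blur itself uses a running sum, so its cost is independent of radius:

    import math

    def boxes_for_gauss(sigma, n):
        # W3-style calculation of n box sizes approximating a Gaussian of the given sigma
        w_ideal = math.sqrt((12 * sigma * sigma / n) + 1)
        wl = int(math.floor(w_ideal))
        if wl % 2 == 0:
            wl -= 1                      # box sizes must be odd
        wu = wl + 2
        m = round((12 * sigma * sigma - n * wl * wl - 4 * n * wl - 3 * n) / (-4 * wl - 4))
        return [wl if i < m else wu for i in range(n)]

    def box_blur_row(src, radius):
        # O(n) running-sum box blur of one row, clamping reads past the edges
        n, count = len(src), 2 * radius + 1
        acc = sum(src[min(max(i, 0), n - 1)] for i in range(-radius, radius + 1))
        out = []
        for i in range(n):
            out.append(acc / count)
            acc += src[min(i + radius + 1, n - 1)] - src[max(i - radius, 0)]
        return out

    def gaussian_approx(img, sigma):
        # img is a 2D list of grayscale values; three box passes ~= a true Gaussian
        for w in boxes_for_gauss(sigma, 3):
            r = (w - 1) // 2
            img = [box_blur_row(row, r) for row in img]   # horizontal pass
            img = [list(c) for c in zip(*img)]            # transpose
            img = [box_blur_row(row, r) for row in img]   # vertical pass
            img = [list(c) for c in zip(*img)]            # transpose back
        return img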

Take-home message: GIMP’s IIR implementation is excellent – very fast, and it produces a true Gaussian, no estimations. PhotoDemon is surprisingly competitive for a single-threaded VB6 app. Paint.NET’s Gaussian is quite poor in both speed and quality: its resulting blur is muddier than a true Gaussian, and much slower than you’d expect for a box-blur approximation… so I honestly have no idea how they’ve implemented it.

Motion Blur

PhotoDemon used "Quality" mode instead of "Speed", meaning bilinear interpolation was applied to the rotated image.  No extra options are available for this tool in GIMP or Paint.NET.
PhotoDemon used “Quality” mode instead of “Speed”, meaning bilinear interpolation was applied to the rotated image. Also, “blur symmetrically” was checked. No extra options are available for this tool in GIMP or Paint.NET.

Motion blur is a bit more problematic than Gaussian blur, because it doesn’t work in a square pattern. A naive approach would have you use something like Bresenham’s algorithm on each pixel, tracing a line at the specified angle and averaging interpolated values as you go.

A much better approach is to simply rotate the image by the requested angle, apply a (very fast) horizontal blur, then rotate the image back into place. If you use a fast rotation algorithm (like the famous 3-shear technique), this can make motion blur very quick.
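To make that concrete, here is a rough Python sketch of the rotate -> blur -> rotate pipeline, using NumPy and SciPy on a grayscale float array. (This illustrates the general technique described above, not any particular editor’s implementation; the running-sum blur keeps the horizontal pass O(width) regardless of distance.)

    import numpy as np
    from scipy import ndimage

    def horizontal_box_blur(img, radius):
        # O(width) running-sum horizontal blur via a padded cumulative sum
        padded = np.pad(img, ((0, 0), (radius + 1, radius)), mode="edge")
        csum = np.cumsum(padded, axis=1, dtype=np.float64)
        return (csum[:, 2 * radius + 1:] - csum[:, :-(2 * radius + 1)]) / (2 * radius + 1)

    def motion_blur(img, angle, distance):
        h, w = img.shape
        # Rotate so the blur direction becomes horizontal; expand the canvas so
        # corners aren't clipped, and use bilinear resampling (order=1) for quality
        rotated = ndimage.rotate(img, angle, reshape=True, order=1)
        blurred = horizontal_box_blur(rotated, max(1, distance // 2))
        # Rotate back, then crop the center to the original size
        restored = ndimage.rotate(blurred, -angle, reshape=True, order=1)
        top, left = (restored.shape[0] - h) // 2, (restored.shape[1] - w) // 2
        return restored[top:top + h, left:left + w]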

My PhotoDemon implementation does not use the fast 3-shear technique; it uses a naive, geometric rotation (reverse-mapped) with bilinear interpolation. I expected this to make it quite a bit slower than comparable tools in GIMP and Paint.NET, but I was surprised to discover that both software packages are… well, pretty damn terrible.

Based on a brief perusal of GIMP’s source code, they appear to use the naive Bresenham approach, which explains why it’s so slow.

Once again, Paint.NET’s execution time makes no sense to me. For a software package that claims “extensive work has gone into making Paint.NET the fastest image editor available”, methinks they need a bit more “extensive work” on this particular tool…

Radial Blur

As before, PhotoDemon uses the “quality” setting for bilinear interpolation. Paint.NET was applied at quality setting 2 out of 5, the default setting. (This results in a noticeably lower-quality image than PhotoDemon or GIMP.) GIMP does not provide any additional options for this tool.

And so we move to Radial Blur, where we find a surprising role reversal: Paint.NET gives a much better showing here, while GIMP turns in the worst performance yet. Again, a brief look at GIMP’s source code for this function shows a questionable nested-loop approach to the problem. Tracing an arc-like path for each pixel is a bad idea, and while bilinear interpolation is used to improve the output quality – same as PhotoDemon – the time required makes this tool pretty much unusable.

PhotoDemon’s implementation is nothing particularly special, which makes its relative performance so surprising. I use a well-known trick where I convert the image to polar coordinates, apply a horizontal blur, then convert the image back to Cartesian coordinates. A small amount of image quality is lost by the two coordinate conversions, but because we are blurring the image anyway, this doesn’t matter much. That said, for small angles (< 5 degrees), both GIMP and Paint.NET produce better-looking output. At larger angles, however, PhotoDemon’s is much better. Both GIMP and Paint.NET produce moiré patterns, presumably from sampling at discrete intervals, while PhotoDemon’s output is clean and smooth. This could probably be fixed in Paint.NET by using a higher quality setting, but quality setting 2/5 was already slow enough!
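For the curious, here is the shape of that polar-coordinate trick in Python (NumPy/SciPy, grayscale float arrays, bilinear sampling via map_coordinates). This is a sketch of the idea rather than PhotoDemon’s actual VB6 code, and the polar buffer dimensions are arbitrary choices; note that the same machinery yields zoom blur if you blur along the radius axis instead of the angle axis:

    import numpy as np
    from scipy import ndimage

    def radial_blur(img, degrees, polar_h=1024, polar_w=2048):
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_max = np.hypot(cy, cx)
        # Unwrap to polar coordinates: rows = radius, columns = angle
        rr, tt = np.meshgrid(np.linspace(0, r_max, polar_h),
                             np.linspace(0, 2 * np.pi, polar_w, endpoint=False),
                             indexing="ij")
        polar = ndimage.map_coordinates(img, [cy + rr * np.sin(tt), cx + rr * np.cos(tt)],
                                        order=1, mode="nearest")
        # A plain horizontal blur along the angle axis is exactly the radial blur;
        # mode="wrap" lets the blur window wrap around at 0/360 degrees
        size = max(1, int(polar_w * degrees / 360.0))
        polar = ndimage.uniform_filter1d(polar, size=size, axis=1, mode="wrap")
        # Wrap back to Cartesian coordinates
        ys, xs = np.mgrid[0:h, 0:w]
        rr = np.hypot(ys - cy, xs - cx) / r_max * (polar_h - 1)
        tt = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi) / (2 * np.pi) * polar_w
        return ndimage.map_coordinates(polar, [rr, tt], order=1, mode="wrap")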

The top-left corner of the image after PhotoDemon’s radial blur. Buttery smooth, and accurate edge handling.
Same corner, but from Paint.NET’s radial blur. Nasty moiré patterns, and problematic handling in the corner – from an algorithm that took 4x longer to run.

Zoom Blur

No, that huge green bar is not an error. GIMP took a whopping 17 minutes to render a 200px zoom blur. PhotoDemon’s “traditional” mode was used to provide comparable output. Paint.NET does not offer any specialized options for this tool.

Last up is Zoom Blur, and we have a surprising winner! Paint.NET’s zoom blur implementation is excellent – great quality, very fast, and overall a huge improvement from their other blur tools. I have no idea why Zoom Blur is significantly faster than their Gaussian Blur implementation at a comparable pixel size, so I can only assume that some kind of specialized optimizations have been added. Nice work, Paint.NET team!

GIMP… I don’t even know what to say. It’s possible that I triggered some sort of problem with GIMP’s tile-based processing system, because there is no good way to explain a 17-minute processing time for such a straightforward function. Even a naive implementation shouldn’t take anywhere near that long. Their implementation has loops nested five-deep (dear god), and while bilinear interpolation is used to improve output, that algorithm is so poorly written that I frankly think they should consider removing it completely. Even at very low distances, rendering takes forever. The original copyright date on the source file is 1997, so perhaps someone familiar with GIMP’s internals should give this one a second look.

PhotoDemon uses the same trick here as with radial blur. The image is converted to polar coordinates (with swapped x and y values compared to the radial blur conversion), a horizontal blur is applied, then the image is converted back. Again, there is quality loss at low values, and both Paint.NET and GIMP provide better-quality output at very small radii. To mitigate this, I provide a second style on that dialog, which uses an iterative image-sized alpha blend to generate a blur. One of the neat things about that approach is that the image can be zoomed-out as well as zoomed-in.

I doubt there is a legitimate use for zoom-blur-outward like this, but it wasn’t any extra work to implement. :)

Conclusions

Blur algorithm performance is hugely variable in both GIMP and Paint.NET. I’ll admit – I find it a bit amusing that my little PhotoDemon project, written with a 15-year-old programming language and compiler, outperforms them so handily in multiple areas, despite my implementations being generally lazy, single-threaded, and heavily CPU-bound. I also call “bullshit” on Paint.NET’s claim about “extensive work going into making Paint.NET the fastest image editor available.” I think the Paint.NET team does great work, and their software is a wonderful improvement over many free and paid photo editors, but its performance is greatly lacking in a number of areas.

Then there is GIMP. While I am very grateful for their software, and have learned to love its many quirks, there’s no denying that whole swaths of its source code are in desperate need of a revamp. I imagine there is no point revisiting items like blur until they complete their migration to GEGL – perhaps then we will see big improvements in the performance of these various blur functions.

If there’s a take-home message to all this, it’s that algorithms will always be more important than programming languages. A well-written algorithm in a “slow” language will often outperform a poorly written algorithm in a “fast” language. VB6 may be forgotten and nearly dead, but I’m happy to see it staying competitive with the titans of the “free photo editor” world. :)

If you read the article all the way to here, I hope you’ll give PhotoDemon a look:

http://www.tannerhelland.com/photodemon/#download

For a free, open-source photo editor, it has a lot of nice features, and I can empirically state that it outperforms GIMP and Paint.NET in at least a few areas! (The current nightly build is pretty much how the next stable release (6.0) will look, minus a few minor bugfixes still to complete.)

PhotoDemon 5.4 is live – now with German, French, and Dutch language support

Summary

PhotoDemon 5.4 is complete. New features include language support (German, French, and Dutch), a full-featured batch processing wizard, shadow/highlight correction, nine new distort tools, vignetting, median noise removal, JPEG and PNG optimization, and more. Download it here.

Kaleidoscope is probably the least practical (but most fun!) new tool in 5.4. Also, German!

Highlight feature: support for multiple languages!

This is the biggest addition in version 5.4, and I can only claim partial credit for it. Primary credit goes to Frank Donckers, a fellow VB programmer who prototyped the initial translation engine for me. As if that isn’t incredible enough, Frank also supplied the translations for French, German, and Dutch (Flemish), so I owe him an enormous debt of gratitude. Thank you, Frank!

One of the neatest aspects of this feature is the ability to change the language at run-time via the Language menu. Unlike every other program I have ever used, no restart is required: PhotoDemon dynamically changes the program’s entire language immediately, and if you change your mind, you can switch to any other language at any time.

I hope these three languages are only the beginning. If you speak a language other than English, please consider contributing a new PhotoDemon translation! No programming knowledge is required, and you will receive full credit for your work. Contact me for more details.

Nine new Distort-style tools

Add and remove lens distortion. Swirl. Ripple. Pinch and whirl. Waves. Kaleidoscope. Polar conversion (both directions). Figured glass (dents).

The new Ripple tool. All distort tools use resampling for improved image quality, and all provide real-time previews.
The new Figured Glass tool uses Perlin Noise to provide a warped glass look to images. (Note: the source image is a promotional photo for ABC’s Once Upon a Time.)

Vastly improved file format support

The new JPEG export dialog. Optimization is a lossless way to reduce file size – very handy for JPEGs headed to the web.

JPEGs now support automatic EXIF rotation on import, and a variety of options on export (Huffman table optimization, progressive scan, thumbnail embedding, specific subsampling). TIFFs support CMYK encoding and a number of compression schemes (none, PackBits, LZW, CCITT 3 and 4, zLib, and more). PNG exporting supports variable compression strength, interlacing, and background color chunk preservation. PPMs can be exported with RAW or ASCII encoding. BMP and TGA files now support RLE encoding. And for icons, animated GIFs, and multipage TIFFs, all images inside a file can now be loaded (instead of just the first one).

These format settings can be accessed from the Tools -> Options menu, and the new Batch Process tool also provides direct access.

Revamped standard tools, including Box Blur, Gaussian Blur, Smart Blur, and Unsharp Masking

Smart blur can be used to smooth out specific features, like skin, while leaving edges and fine details intact. (Image of the lovely and talented Rashida Jones, via Glamour)

PhotoDemon is now a much better photo editor, thanks to the revamp of its core convolution filters. Larger tool dialogs make it easier to see the result of your actions. Better performance means real-time previews, even at enormous radii (up to 200px for all filters, plus 500px for box blur!). And all convolution algorithms now use specialized edge handling code to make sure every part of the image – from center to border – is handled correctly.

Also, the program’s Gaussian Blur is now a true Gaussian blur. There are no shortcuts, no estimations, and it’s still fast enough to preview in real-time.

New advanced color tools, including Shadow/Midtone/Highlight adjustments, color balancing, and monochrome-to-grayscale recovery

Shadow / Midtone / Highlight correction allows for detailed recovery of light and dark sections of an image. Thanks to dA user deviantsnark for the Borderlands wallpaper.
Color balance provides a per-color way to adjust the hue of an image (versus hue / saturation adjustments, which apply equally to all colors). Thanks to dA user LadyGT for the beautiful Tomb Raider artwork.

New stylize tools, including Film Grain, Vignetting, Modern Art, Trace Contour, Film Noir, and Comic Book

Vignetting refers to the rounded halo around the edges of the image. The new tool allows you to add halos of any size, softness (how blurry the edges are), transparency, and color, and it can automatically fit the effect to any aspect ratio. Thanks to dA user chrismickens for the great Mad Men artwork.
PhotoDemon now allows you to add artificial film grain to any image. This effect was famously used in the Mass Effect trilogy to create a more gritty, realistic look.
Contour tracing uses a unique stack of algorithms to “paint” the main features of an image. It is also a useful edge detection tool.

Noise removal via Median Filtering

Median filtering serves two main purposes: removal of image noise (unwanted pixel variance), and recovery of damaged images. The severely damaged image above is courtesy Wikipedia; the after image is PhotoDemon’s correction (note that it recovers more than the Wikipedia example!)

Automatic image cropping

If an image has empty space around the edges – like this Firefox wallpaper – Autocrop can automatically remove it for you. Autocrop supports thresholding, so it works just fine on JPEGs.

New Batch Process Wizard

If I had to pick a personal “favorite” new feature in this release, it would be the brand-new batch processing wizard. This tool is a highlight of PhotoDemon’s emphasis on usability, and I researched more than a dozen other image batch processing tools while building it. I could be biased, but I believe PhotoDemon is now the best general-purpose image batch processor available on the web.

The first page of the new Batch Process wizard. This step is by far the most intricate, and a ton of work went into exposing full functionality without overwhelming the user. To my knowledge, PhotoDemon is the only batch processor that allows you to create your own batch list from any number of source directories spread across any number of drives.

Drag-and-drop is now supported when building the list of images to be processed – not only from within the dialog, by dragging between list boxes, but also from Windows Explorer. Live previews make it much easier to find the images you want, while helpful instructions on the left-hand side expose some of the more nuanced functionality.

Once a list of images has been created, you can optionally choose to apply photo editing actions to each image. Unlike other batch processors, PhotoDemon allows you to use any photo editing actions provided by the program – not just a tiny subset.

Page 2 is the barest page of the new wizard. The current version allows you to skip photo editing actions (if you want to just do a batch rename or format conversion, for example), or you can apply any recorded macro. In the next release, I will add a set of “one-click” presets for common actions, like resizing, or optimizing images for the web.

Once you’ve created a list of images and chosen any photo editing actions, an output image format can be set. New to this version, PhotoDemon can retain original image formats – allowing you to apply actions to mixed PNG/JPEG collections, for example.

Page 3 asks you to choose an output format. If you want to retain original image formats, that’s cool too – PhotoDemon now supports this! Alternatively, you can select a single output format, with access to the program’s full range of detailed format settings. In the example above, you can see all the options available for JPEGs, including new support for optimization (lossless file size reduction), thumbnails, progressive encoding, and specific subsampling.

The last step of the wizard asks you to choose a location to save all the processed files. If desired, a number of rename options are also available.

The final page asks you to select an output folder where PhotoDemon can save the processed images. New to this release is a wide range of renaming options – things like adding custom text to each filename, removing text from each filename, changing case, and replacing spaces with underscores for web-bound images. Additionally, original filenames can be retained, or PhotoDemon can just use ascending numbers.

So that’s the new batch wizard! I’d love feedback from power users, as there are a lot of moving parts to the batch tool, and while I have been very thorough in my own testing, it’s impossible to test every combination of variables. So if you find anything that doesn’t work, please let me know.

Improved features: Gamma Correction, Dilate, Erode, Monochrome Conversion, Histogram and Printing

As is usual with each PhotoDemon update, a number of existing tools received redesigns or new features. Gamma correction now displays live gamma curves, and each color component (red, green, and blue) can be adjusted individually. Dilate and Erode use a new algorithm that’s significantly more optimized, meaning sizes up to 200px radius can be previewed in real-time. Monochrome conversion supports any two colors (not just black and white), while the printing and histogram dialogs were completely overhauled to make them more user-friendly.

The new gamma correction dialog. The old dialog forced users to correct only one channel at a time. The new one allows for correcting all three, with a live preview of the new curves. Thanks to dA user Kouken for the Persona fan art.

Universal color depth support at import and export time

PhotoDemon can now write 1, 4, 8, 24, and 32bpp variations of every supported file format. By default, when saving images, color depth detection is completely automated – the program will count the number of colors in an image and automatically select the most appropriate color depth for the output file. Alternatively, you can set a preference to manually specify color depth at save time. This also works for grayscale images; for example, the JPEG encoder will now detect grayscale images and write out 8bpp JPEGs accordingly. Alpha thresholding is also available when saving 32bpp images to 8bpp (e.g. PNG to GIF).
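To illustrate the automated detection, here is a simplified Python sketch using Pillow. (The real logic also has to weigh each format’s supported color depths, as described below; the thresholds here just follow the standard 1/4/8/24/32bpp ladder.)

    from PIL import Image

    def detect_output_depth(img):
        rgba = img.convert("RGBA")
        # getcolors() returns None once the count exceeds maxcolors
        colors = rgba.getcolors(maxcolors=1 << 24)
        if colors is None:
            return 24                                  # true color
        if any(a < 255 for _, (_, _, _, a) in colors):
            return 32                                  # image needs an alpha channel
        unique = len(colors)
        if unique <= 2:
            return 1                                   # monochrome
        if unique <= 16:
            return 4                                   # 16-color palette
        if unique <= 256:
            return 8                                   # 256-color palette (incl. grayscale)
        return 24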

When saving a 32bpp image with a complex alpha channel to a simple format like GIF, the program has to reduce the alpha channel to binary values. A new threshold dialog helps you find the perfect value.

This feature was a nightmare to implement, as PhotoDemon supports a huge variety of file formats, and each one has a detailed list of color depths it does or does not support. Full support for transparency adds a whole other layer of complexity. But now that the feature is completely implemented and rigorously tested, I can’t imagine it any other way. Color depth is not something users should have to worry about, and automatic handling should be a feature of every photo editor (rather than pestering you for color depth every time you save… *cough* GIMP *cough*).

New feature: pngnq-s9 plugin for optimizing PNG files

At the request of a good friend, PhotoDemon now provides integrated support for the pngnq-s9 variety of the famous pngnq library. For the uninitiated, pngnq provides a way to reduce 32bpp PNG files to 8bpp while still preserving complex alpha channels, allowing for file size reductions of up to 75%. Pngnq provides superior results over other tools by using a neural network to reduce image colors, unlike the brute-force median cut algorithm used by software like pngquant. See here for a gallery of sample images if you’re curious.

Pngnq-s9 is a further improvement over stock pngnq, including cool features like YUV color space matching for better results, and the ability to preserve alpha values of 0 and 255. When saving 32bpp PNG files to 8bpp, PhotoDemon will now lean on pngnq-s9 to do the heavy lifting.

In the next version of PhotoDemon, pngnq-s9 support will be integrated into the batch process wizard as a new “optimize for web” option. For now, if you want to test out the feature, head to Tools -> Options -> Saving, and change the “set outgoing color depth” option to “ask me what color depth I want to use”. Then save a 32bpp PNG image to 8bpp and compare the file size.

New plugin manager and plugin downloader

Sometimes it makes sense for PhotoDemon to use an existing open-source project instead of writing a new feature from scratch. These support libraries are included as “plugins”, and there are four of them in the current version. Each one provides indispensable features (like scanner support) at a fraction of the cost of building such features from scratch.

Some of these plugins expose additional functionality, but it has always been a challenge for PhotoDemon to expose these additional features to the user. So the program now has a detailed plugin manager, where advanced users can change settings on a per-plugin basis, including activating or deactivating plugins as necessary. The manager also tracks availability and version numbers of each plugin.

It is now much, much easier for the program to keep its plugins up-to-date. Advanced users may also find it useful to enable or disable plugins while testing various features. All changes happen in real-time – no restart required.
The pngnq-s9 page of the plugin manager. Advanced or esoteric plugin features can be adjusted here, which helps keep the main “Options” dialog uncluttered.

Many canvas and interface improvements

Larger effect and tool previews. Persistent zoom-in/zoom-out buttons. Image URLs and files can now be directly pasted as new images. Improved drag/drop support, including drag/drop from common dialogs. New “Safe” save behavior to avoid overwriting original files. New Close All Images menu. New algorithms for auto-zoom when images are loaded, meaning much better results at all screen sizes. Tool and file panels can now be hidden. Higher-quality dynamic icons for the program, taskbar, child windows, and Recent Images list. Improved support for low screen resolutions.

Program-wide performance improvements

More aggressive memory management means lower resource usage. Program loading has been heavily streamlined, and now happens in less than a second on modern hardware. Image loading is much faster and more robust, including better support for damaged or incomplete image files.

More robust and comprehensive error handling

When loading multiple images, the program will now suppress warnings and failures (such as invalid files) until all images have been loaded. Many subclassing issues have been resolved – so no more surprise crashes! Overall this release should be extremely stable.

Many miscellaneous bug fixes and improvements

This article is already way too long, so I won’t bore you with a list of all the minor fixes and improvements. For a full list, see the commit log at https://github.com/tannerhelland/PhotoDemon/commits/master

In Conclusion…

This release was a lot bigger than I’d like future releases to be. The biggest delay came from adding language support, as that affected every piece of text in every part of the program (nearly 10,000 words in total!). Now that language support is complete, I foresee future releases being much tidier and quicker.

A developer’s work is never done, and a roadmap for version 5.6 is already being worked on. Some features – like improvements to the selection tool, or a “smart resize” option – were cut from 5.4 at the last minute, and they will be among the first features added to 5.6. The batch process wizard will see a number of additions, and I’d love to add some advanced multilanguage features, like a way for casual users to fix or adjust translations on-the-fly. I also think I’m finally ready to tackle the monumental task of writing a user manual… should be fun!

As always, the best way to stay abreast of PhotoDemon development is the official code repository at https://github.com/tannerhelland/PhotoDemon

But for now, I hope you enjoy all the new features in 5.4, and please remember to donate if you find the software useful.

Image Dithering: Eleven Algorithms and Source Code

Dithering: An Overview

Today’s graphics programming topic – dithering – is one I receive a lot of emails about, which some may find surprising. You might think that dithering is something programmers shouldn’t have to deal with in 2012. Doesn’t dithering belong in the annals of technology history, a relic of times when “16 million color displays” were something programmers and users could only dream of? In an age when cheap mobile phones operate in full 32bpp glory, why am I writing an article about dithering?

Actually, dithering is still a surprisingly applicable technique, not just for practical reasons (such as preparing a full-color image for output on a non-color printer), but for artistic reasons as well. Dithering also has applications in web design, where it is a useful technique for reducing images with high color counts to lower color counts, reducing file size (and bandwidth) without harming quality. It also has uses when reducing 48 or 64bpp RAW-format digital photos to 24bpp RGB for editing.

And these are just image dithering uses – dithering still has extremely crucial roles to play in audio, but I’m afraid I won’t be discussing audio dithering here. Just image dithering.

In this article, I’m going to focus on three things:

  • a basic discussion of how image dithering works
  • eleven specific two-dimensional dithering formulas, including famous ones like “Floyd-Steinberg”
  • how to write a general-purpose dithering engine

Update 11 June 2016: some of the sample images in this article have been updated to better reflect the various dithering algorithms. Thank you to commenters who noted problems with the previous images!

Dithering: Some Examples

Consider the following full-color image, a wallpaper of the famous “companion cube” from Portal:

This will be our demonstration image for this article. I chose it because it has a nice mixture of soft gradients and hard edges.

On a modern LCD or LED screen – be it your computer monitor, smartphone, or TV – this full-color image can be displayed without any problems. But consider an older PC, one that only supports a limited palette. If we attempt to display the image on such a PC, it might look something like this:

This is the same image as above, but restricted to a websafe palette.

Pretty nasty, isn’t it? Consider an even more dramatic example, where we want to print the cube image on a black-and-white printer. Then we’re left with something like this:

At this point, the image is barely recognizable.

Problems arise any time an image is displayed on a device that supports fewer colors than the image contains. Subtle gradients in the original image may be replaced with blobs of uniform color, and depending on the restrictions of the device, the original image may become unrecognizable.

Dithering is an attempt to solve this problem. Dithering works by approximating unavailable colors with available ones – mixing and matching the colors we do have in a way that mimics the colors we don’t. As an example, here is the cube image once again reduced to the colors of a theoretical old PC – only this time, dithering has been applied:

A big improvement over the non-dithered version!

If you look closely, you can see that this image uses the same colors as its non-dithered counterpart – but those few colors are arranged in a way that makes it seem like many more colors are present.

As another example, here is a black-and-white version of the image with similar dithering applied:

The specific algorithm used on this image is “2-row Sierra” dithering.

Despite only black and white being used, we can still make out the shape of the cube, right down to the hearts on either side. Dithering is an extremely powerful technique, and it can be used in ANY situation where data has to be represented at a lower resolution than it was originally created for. This article will focus specifically on images, but the same techniques can be applied to any 2-dimensional data (or 1-dimensional data, which is even simpler!).

The Basic Concept Behind Dithering

Boiled down to its simplest form, dithering is fundamentally about error diffusion.

Error diffusion works as follows: let’s say we want to reduce a grayscale photograph to black and white, so we can print it on a printer that only supports pure black (ink) or pure white (no ink). The first pixel in the image is dark gray, with a value of 96 on a scale from 0 to 255, where zero is pure black and 255 is pure white.

Here is a visualization of the RGB values in our example.

When converting such a pixel to black or white, we use a simple formula – is the color value closer to 0 (black) or 255 (white)? 96 is closer to 0 than to 255, so we make the pixel black.

At this point, a standard approach would simply move to the next pixel and perform the same comparison. But a problem arises if we have a bunch of “96 gray” pixels – they all get turned to black, and we’re left with a huge chunk of empty black pixels, which doesn’t represent the original gray color very well at all.

Error diffusion takes a smarter approach to the problem. As you might have inferred, error diffusion works by “diffusing” – or spreading – the error of each calculation to neighboring pixels. If it finds a pixel of 96 gray, it too determines that 96 is closer to 0 than to 255 – and so it makes the pixel black. But then the algorithm makes note of the “error” in its conversion – specifically, that the gray pixel we have forced to black was actually 96 steps away from black.

When it moves to the next pixel, the error diffusion algorithm adds the error of the previous pixel to the current pixel. If the next pixel is also 96 gray, instead of simply forcing that to black as well, the algorithm adds the error of 96 from the previous pixel. This results in a value of 192, which is actually closer to 255 – and thus closer to white! So it makes this particular pixel white, and it again makes note of the error – in this case, the error is -63, because 192 is 63 less than 255, which is the value this pixel was forced to.

As the algorithm proceeds, the “diffused error” results in an alternating pattern of black and white pixels, which does a pretty good job of mimicking the “96 gray” of the section – much better than just forcing the color to black over and over again. Typically, when we finish processing a line of the image, we discard the error value we’ve been tracking and start over again at an error of “0” with the next line of the image.
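In code, this one-dimensional version of the algorithm is only a few lines. Here is a Python sketch of the process just described:

    def error_diffusion_1d(row):
        # Reduce one row of grayscale pixels (0-255) to pure black or white,
        # pushing each pixel's quantization error onto its right-hand neighbor
        out, error = [], 0
        for pixel in row:
            value = pixel + error
            new = 255 if value >= 128 else 0   # snap to the nearest extreme
            error = value - new                # remember what we lost
            out.append(new)
        return out

    # A run of "96 gray" pixels alternates, mimicking the original tone:
    print(error_diffusion_1d([96] * 8))
    # [0, 255, 0, 255, 0, 0, 255, 0] - 3 of 8 pixels white, close to 96/255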

Here is an example of the cube image from above with this exact algorithm applied – specifically, each pixel is converted to black or white, the error of the conversion is noted, and it is passed to the next pixel on the right:

This is the simplest possible application of error diffusion dithering.

Unfortunately, error diffusion dithering has problems of its own. For better or worse, dithering always leads to a spotted or stippled appearance. This is an inevitable side-effect of working with a small number of available colors – those colors are going to be repeated over and over again, because there are only so many of them.

In the simple error diffusion example above, another problem is evident – if you have a block of very similar colors, and you only push the error to the right, all the “dots” end up in the same place! This leads to funny lines of dots, which is nearly as distracting as the original, non-dithered version.

The problem is that we’re only using a one-dimensional error diffusion. By only pushing the error in one direction (right), we don’t distribute it very well. Since an image has two dimensions – horizontal and vertical – why not push the error in multiple directions? This will spread it out more evenly, which in turn will avoid the funny “lines of speckles” seen in the error diffusion example above.

Two-Dimensional Error Diffusion Dithering

There are many ways to diffuse an error in two dimensions. For example, we can spread the error to one or more pixels on the right, one or more pixels on the left, one or more pixels up, and one or more pixels down.

For simplicity of computation, all standard dithering formulas push the error forward, never backward. If you loop through an image one pixel at a time, starting at the top-left and moving right, you never want to push errors backward (e.g. left and/or up). The reason for this is obvious – if you push the error backward, you have to revisit pixels you’ve already processed, which leads to more errors being pushed backward, and you end up with an infinite cycle of error diffusion.

So for standard loop behavior (starting at the top-left of the image and moving right), we only want to push errors right and down.

Apologies for the crappy image – but I hope it helps illustrate the gist of proper error diffusion.

As for how specifically to propagate the error, a great number of individuals smarter than I have tackled this problem head-on. Let me share their formulas with you.

(Note: these dithering formulas are available multiple places online, but the best, most comprehensive reference I have found is this one.)

Floyd-Steinberg Dithering

The first – and arguably most famous – 2D error diffusion formula was published by Robert Floyd and Louis Steinberg in 1976. It diffuses errors in the following pattern:


       X   7
   3   5   1

     (1/16)

In the notation above, “X” refers to the current pixel. The fraction at the bottom represents the divisor for the error. Said another way, the Floyd-Steinberg formula could be written as:


           X    7/16
   3/16  5/16   1/16

But that notation is long and messy, so I’ll stick with the original.

To use our original example of converting a pixel of value “96” to 0 (black) or 255 (white), if we force the pixel to black, the resulting error is 96. We then propagate that error to the surrounding pixels by dividing 96 by 16 ( = 6), then multiplying it by the appropriate values, e.g.:


           X     +42
   +18    +30    +6

By spreading the error to multiple pixels, each with a different value, we minimize any distracting bands of speckles like the original error diffusion example. Here is the cube image with Floyd-Steinberg dithering applied:

Floyd-Steinberg dithering

Not bad, eh?

Floyd-Steinberg dithering is easily the most well-known error diffusion algorithm. It provides reasonably good quality, while only requiring a single forward array (a one-dimensional array the width of the image, which stores the error values pushed to the next row). Additionally, because its divisor is 16, bit-shifting can be used in place of division – making it quite fast, even on old hardware.

As for the 1/3/5/7 values used to distribute the error – those were chosen specifically because they create an even checkerboard pattern for perfectly gray images. Clever!
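Here is what the full algorithm looks like in code – a straightforward Python sketch over a 2D grayscale array, without the forward-array or bit-shifting optimizations mentioned above:

    def floyd_steinberg(img):
        # img is a mutable 2D list of grayscale values (0-255); dithered in place
        h, w = len(img), len(img[0])
        for y in range(h):
            for x in range(w):
                old = img[y][x]
                new = 255 if old >= 128 else 0
                img[y][x] = new
                err = old - new
                # Distribute the error with the 7/16, 3/16, 5/16, 1/16 weights
                if x + 1 < w:
                    img[y][x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1][x - 1] += err * 3 / 16
                    img[y + 1][x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1][x + 1] += err * 1 / 16
        return img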

One warning regarding “Floyd-Steinberg” dithering – some software may use other, simpler dithering formulas and call them “Floyd-Steinberg”, hoping people won’t know the difference. This excellent dithering article describes one such “False Floyd-Steinberg” algorithm:


   X   3
   3   2

   (1/8)

This simplification of the original Floyd-Steinberg algorithm not only produces markedly worse output – but it does so without any conceivable advantage in terms of speed (or memory, as a forward-array to store error values for the next line is still required).

But if you’re curious, here’s the cube image after a “False Floyd-Steinberg” application:

Much more speckling than the legit Floyd-Steinberg algorithm – so don’t use this formula!

Jarvis, Judice, and Ninke Dithering

In the same year that Floyd and Steinberg published their famous dithering algorithm, a lesser-known – but much more powerful – algorithm was also published. The Jarvis, Judice, and Ninke filter is significantly more complex than Floyd-Steinberg:


             X   7   5 
     3   5   7   5   3
     1   3   5   3   1

           (1/48)

With this algorithm, the error is distributed to three times as many pixels as in Floyd-Steinberg, leading to much smoother – and more subtle – output. Unfortunately, the divisor of 48 is not a power of two, so bit-shifting can no longer be used – but only values of 1/48, 3/48, 5/48, and 7/48 are used, so these values can each be calculated but once, then propagated multiple times for a small speed gain.

Another downside of the JJN filter is that it pushes the error down not just one row, but two rows. This means we have to keep two forward arrays – one for the next row, and another for the row after that. This was a problem at the time the algorithm was first published, but on modern PCs or smartphones this extra requirement makes no difference. Frankly, you may be better off using a single error array the size of the image, rather than erasing the two single-row arrays over and over again.

Jarvis, Judice, Ninke dithering

Stucki Dithering

Five years after Jarvis, Judice, and Ninke published their dithering formula, Peter Stucki published an adjusted version of it, with slight changes made to improve processing time:


             X   8   4 
     2   4   8   4   2
     1   2   4   2   1

           (1/42)

The divisor of 42 is still not a power of two, but all the error propagation values are – so once the error is divided by 42, bit-shifting can be used to derive the specific values to propagate.
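
In code, that trick looks something like this (dErr being a pixel's full error; the names are mine, and a small amount of precision is traded for speed by dividing first):


Dim baseErr As Long, err8 As Long, err4 As Long, err2 As Long, err1 As Long

baseErr = dErr \ 42   'the single true division
err8 = baseErr * 8    'this multiply and the halvings below reduce to shifts in most languages
err4 = err8 \ 2
err2 = err4 \ 2
err1 = err2 \ 2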

For most images, there will be minimal difference between the output of the Stucki and JJN algorithms, so Stucki is often used because of its slight speed increase.

Stucki dithering

Atkinson Dithering

During the mid-1980s, dithering became increasingly popular as computer hardware advanced to support more powerful video drivers and displays. One of the best dithering algorithms from this era was developed by Bill Atkinson, an Apple employee who worked on everything from MacPaint (which he wrote from scratch for the original Macintosh) to HyperCard and QuickDraw.

Atkinson’s formula is a bit different from the others in this list, because it only propagates a fraction of the error instead of the full amount – the matrix below distributes six equal parts against a divisor of 8, so only three-quarters of the error is diffused. This technique is sometimes offered by modern graphics applications as a “reduced color bleed” option. By only propagating part of the error, speckling is reduced, but contiguous dark or bright sections of an image may become washed out.


         X   1   1 
     1   1   1
         1

       (1/8)

Atkinson dithering

Burkes Dithering

Seven years after Stucki published his improvement to Jarvis, Judice, Ninke dithering, Daniel Burkes suggested a further improvement:


             X   8   4 
     2   4   8   4   2

           (1/32)

Burkes’s suggestion was to drop the bottom row of Stucki’s matrix. Not only did this remove the need for a second forward array, but it also resulted in a divisor that was once again a power of two. This change meant that all math involved in the error calculation could be accomplished by simple bit-shifting, with only a minor hit to quality.

Burkes dithering

Sierra Dithering

The final three dithering algorithms come from Frankie Sierra, who published the following matrices in 1989 and 1990:


             X   5   3
     2   4   5   4   2
         2   3   2
           (1/32)


             X   4   3
     1   2   3   2   1
           (1/16)


         X   2
     1   1
       (1/4)

These three filters are commonly referred to as “Sierra”, “Two-Row Sierra”, and “Sierra Lite”. Their output on the sample cube image is as follows:

Sierra (sometimes called Sierra-3)
Two-row Sierra
Sierra Lite

Other dithering considerations

If you compare the images above to the dithering results of another program, you may find slight differences. This is to be expected. There are a surprising number of variables that can affect the precise output of a dithering algorithm, including:

  • Integer or floating point tracking of errors. Integer-only methods lose some resolution due to quantization errors.
  • Color bleed reduction. Some software reduces the error by a set value – maybe 50% or 75% – to reduce the amount of “bleed” to neighboring pixels.
  • The threshold cut-off for black or white. 127 or 128 are common, but on some images it may be helpful to use other values.
  • For color images, how luminance is calculated can make a big difference. I use the HSL lightness formula ([max(R,G,B) + min(R,G,B)] / 2). Others use ([R+G+B] / 3) or one of the ITU formulas. YUV or CIELAB will offer even better results.
  • Gamma correction or other pre-processing modifications. It is often beneficial to normalize an image before converting it to black and white, and whichever technique you use for this will obviously affect the output.
  • Loop direction. I’ve discussed a standard “left-to-right, top-to-bottom” approach, but some clever dithering algorithms follow a serpentine path, where the left-to-right direction is reversed on each line. This can reduce patches of uniform speckling and give the output a more varied appearance, but it’s more complicated to implement (see the sketch just below this list).
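
As a rough sketch of the serpentine idea (only the direction handling is shown; the dithering work itself is omitted, and the variable names are my own):


Dim x As Long, y As Long
Dim xStart As Long, xEnd As Long, xStep As Long

For y = 0 To imgHeight - 1
    'Even rows scan left-to-right; odd rows scan right-to-left
    If (y And 1) = 0 Then
        xStart = 0: xEnd = imgWidth - 1: xStep = 1
    Else
        xStart = imgWidth - 1: xEnd = 0: xStep = -1
    End If
    For x = xStart To xEnd Step xStep
        '...quantize pixel (x, y) and diffuse its error here; note that
        'the matrix's horizontal offsets must be multiplied by xStep...
    Next x
Next y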

For the demonstration images in this article, I have not performed any pre-processing on the original image. All color matching is done in the RGB space with a cut-off of 127 (values <= 127 are set to 0). Loop direction is standard left-to-right, top-to-bottom.

Which specific techniques you may want to use will vary according to your programming language, processing constraints, and desired output.

I count 9 algorithms, but you promised 11! Where are the other two?

So far I’ve focused purely on error-diffusion dithering, because it offers better results than static, non-diffusion dithering.

But for the sake of completeness, here are demonstrations of two standard “ordered dither” techniques. Ordered dithering leads to far more speckling (and worse results) than error-diffusion dithering, but it requires no forward arrays and is very fast to apply. For more information on ordered dithering, check out the relevant Wikipedia article.

Ordered dither using a 4×4 Bayer matrix
Ordered dither using an 8×8 Bayer matrix
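
For reference, here is a minimal sketch of the 4×4 case in VB6 (my own illustrative code, assuming a two-dimensional array of 8-bit luminance values; the entries are the standard 4×4 Bayer matrix):


Private Sub OrderedDither4x4(ByRef gray() As Byte, ByVal imgWidth As Long, ByVal imgHeight As Long)

    'Standard 4x4 Bayer matrix, stored row-by-row (values 0-15)
    Dim bayer As Variant
    bayer = Array(0, 8, 2, 10, _
                  12, 4, 14, 6, _
                  3, 11, 1, 9, _
                  15, 7, 13, 5)

    Dim x As Long, y As Long, threshold As Long

    For y = 0 To imgHeight - 1
        For x = 0 To imgWidth - 1
            'Map the matrix entry (0-15) to a threshold in the 0-255 range
            threshold = CLng(bayer((y And 3) * 4 + (x And 3))) * 16 + 8
            If gray(x, y) >= threshold Then gray(x, y) = 255 Else gray(x, y) = 0
        Next x
    Next y

End Sub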

With these, the article has now covered a total of 11 different dithering algorithms.

Writing your own general-purpose dithering algorithm

Earlier this year, I wrote a fully functional, general-purpose dithering engine for PhotoDemon (an open-source photo editor). Rather than post the entirety of the code here, let me refer you to the relevant page on GitHub. The black and white conversion engine starts at line 350. If you have any questions about the code – which covers all the algorithms described on this page – please let me know and I’ll post additional explanations.

That engine works by allowing you to specify any dithering matrix in advance, just like the ones on this page. Then you hand that matrix over to the dithering engine and it takes care of the rest.

The engine is designed around monochrome conversion, but it could easily be modified to work on color palettes as well. The biggest difference with a color palette is that you must track separate errors for red, green, and blue, rather than a single luminance error. Otherwise, all the math is identical.
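
As an illustrative sketch (not the engine's actual code), the palette case boils down to a nearest-color search plus three error terms instead of one. Something like this, where pal(i, 0 To 2) is assumed to hold each palette entry's red, green, and blue values:


Private Function NearestPaletteIndex(ByVal r As Long, ByVal g As Long, ByVal b As Long, ByRef pal() As Long) As Long

    Dim i As Long, dr As Long, dg As Long, db As Long
    Dim dist As Long, bestDist As Long
    bestDist = &H7FFFFFFF

    For i = LBound(pal, 1) To UBound(pal, 1)
        'Squared distance in RGB space - no need for a square root
        dr = r - pal(i, 0): dg = g - pal(i, 1): db = b - pal(i, 2)
        dist = dr * dr + dg * dg + db * db
        If dist < bestDist Then
            bestDist = dist
            NearestPaletteIndex = i
        End If
    Next i

End Function

'After matching, track one error per channel instead of a single gray error:
'  errR = r - pal(idx, 0): errG = g - pal(idx, 1): errB = b - pal(idx, 2)
'...then diffuse each of the three with the same matrix weights as before.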

 

This site - and its many free downloads - are 100% funded by donations. Please consider a small contribution to fund server costs and to help me support my family. Even $1.00 helps. Thank you!

Announcing PhotoDemon 5.2 – Selections, HSL, Rotation, HDR, and More

Summary

PhotoDemon v5.2 is now available. New features include selection tools, arbitrary rotation, HSL adjustments, CMYK support, new user preferences, multiple monitor support, and more. Download the update here.

PhotoDemon 5.2
Version 5.2 includes many new tools and features, including PhotoDemon’s first on-canvas tool – “Selections”.

New Feature: Selection Tool

Selections have been one of the top-requested PhotoDemon features since it first released, so I’m glad to finally be able to offer them. A lot of work went into making selections as user-friendly and powerful as possible.

Three render modes are provided. On-canvas resizing and moving are fully supported, as are adjustments by textbox (see screenshot above). Everything in the Color and Filter menus will operate on a selection if available, as well as the Edit -> Copy command.

(Note: as of v5.2, selections are not yet tied into Undo/Redo, and selections will not be recorded as part of a Macro. These features will be added in the next release.)

New Feature: Crop to Selection

Finally!

New Feature: HSL Adjustments

Photoshop and GIMP users should be happy about this tool.

New Feature: Arbitrary (Free) Rotation

Arbitrary rotation comes courtesy of the FreeImage library. A 3-shear method is used: very fast, very high quality.

New Feature: CMY/K Rechanneling

Both CMY and CMYK rechanneling are now available.

New Feature: Sepia (W3C formula)

Here’s the sepia version of the photo from the Rechannel screenshot. I still prefer PhotoDemon’s “Antique” filter for most photos, but this sepia formula (from the W3C spec) provides a pleasant, flat alternative.

New Feature: Preferences Dialog (rewritten from scratch)

Preferences, preferences, and more preferences. The old Preferences dialog was pretty lame, so it was due for an overhaul. Tons of new settings have been added, and they are now organized by category.

New preferences include:

Interface:

  • Render drop shadows between images and canvas (similar to Paint.NET)
  • Full or compact file paths for image windows and Recent File shortcuts
  • Improved font rendering on Vista, Windows 7, and Windows 8 (via Segoe UI)
  • Remember the main window’s location between sessions

Loading and Saving:

  • Tone map imported HDR and RAW images
  • Options for importing all frames or pages of multi-image files (animated GIFs, multipage TIFFs)

Tools:

  • Automatically clear selections after “Crop to Selection” is used

Transparency handling:

  • Pick your own transparency checkerboard colors
  • Pick from three transparency checkerboard sizes (4×4, 8×8, 16×16)
  • Allow PhotoDemon to automatically remove empty alpha channels from imported images

All preferences from v5.0 remain present, and there is now an option to reset all preferences to their default state – so experiment away!

New Feature: Recent File Previews (Vista, Windows 7, Windows 8 only)

Now that recent file previews are available, I honestly can’t use any software that *doesn’t* provide the feature. It makes locating the right file significantly easier – especially with digital camera filenames like IMG_0366.jpg.

New Feature: Multi-Image File Support (animated GIFs, multipage TIFFs)

PhotoDemon will now recognize when you try to load image files that are actually composed of multiple images. You are given the option to import every image, or just the first one (which is what most other software does). The default behavior can be changed in the Edit -> Preferences menu.

New Feature: Waaaay better transparency handling, including adding/removing alpha channels

It’s hard to overstate how much better transparency support is in v5.2 compared to v5.0. Images with alpha channels are now rendered with transparency in all viewport, filter, and tool screens. When printing, saving as 24bpp, or copying to the clipboard, transparent images are automatically composited against a white background. As mentioned previously, user preferences have been added for transparency checkerboard colors and sizes.

PhotoDemon also allows you to add or remove alpha channels entirely. Here’s an example of an image with an alpha channel, and the associated “Image Mode” setting:

Note how the top-level “Mode” icon has changed to match the current mode – this saves you from having to go to the sub-menu to check. I’m a big fan of small touches like this.

And here it is again, after clicking the “Mode -> Photo (RGB | 24bpp | no transparency)” option:

No more alpha!

Finally, PhotoDemon now validates all incoming alpha channels. If an image has a blank or irrelevant alpha channel, PhotoDemon will automatically remove it for you. This frees up RAM, improves performance, and leads to a much smaller file size upon saving. (Note: this feature can be disabled from the Edit -> Preferences menu if you want to maintain blank alpha channels for some reason.)

New Feature: Custom “Confirm Unsaved Image(s)” Prompt

This is the new “unsaved images” prompt in PhotoDemon. A preview is now provided – again, very important for digital photos with obscure names – and the options have been reworked to make them as crystal-clear as possible. Also handy is the “Repeat this action for all unsaved images” option, which will either save or not save all unsaved images per your request.

Improved Feature: Edge Detection

Edge detection now allows for on-black or on-white processing. Generally speaking, on-white is used for artistic purposes, while on-black is used for technical and research ones. (Thanks to Yvonne Strahovski, who appears in the sample image above.)

New Feature: Thermograph Filter

This Wikipedia article describes thermography in great detail. PhotoDemon’s thermography filter works by correlating luminance with heat, and analyzing the image accordingly. Here’s a sample, using a picture of the lovely Alison Brie, of Mad Men and Community fame:

New Feature: JPEG 2000 (JP2/J2K), Industrial Light and Magic (EXR), High-Dynamic Range (HDR) and Digital Fax (G3) image support

PhotoDemon now supports importing the four image types mentioned above, and it also supports JPEG 2000 exporting.

Other New and Improved Features:

  • Much faster resize operations, thanks to an updated FreeImage library (v3.15.4)
  • Multiple monitor support during screen captures (File -> Import -> Screen Capture)
  • Many miscellaneous interface improvements, including generally larger command buttons, text boxes, labels, and more uniform form layouts.
  • Many new and improved menu icons.
  • Heavily optimized viewport rendering. PhotoDemon now uses a triple-buffer rendering pipeline to speed up actions like zooming, scrolling, and using on-canvas tools like the new Selection Tool. Even when working with 32bpp images, all actions render in real-time.
  • Bilinear interpolation is now used during Isometric Conversion. This results in a much higher-quality transform. Hard edges are still left along the image border to make mask generation easy for game designers.
  • Vastly improved image previewing when importing from VB binary files.
  • Better text validation throughout the software. Invalid values are now handled much more elegantly.
  • More accelerator hotkey support, including changes to match Windows standards (such as Ctrl+Y for Redo, instead of the previous Ctrl+Alt+Z).
  • Update checks are now performed every ten days (instead of every time the program is run).
  • All extra program data – including plugins, preferences, saved filters and macros – have been moved to a single /Data subfolder. If you run PhotoDemon on your desktop, this should make things much cleaner for you.
  • PhotoDemon’s current and max memory usage is now displayed in the Preferences -> Advanced panel.
  • Tons of miscellaneous bug fixes, tweaks, and optimizations. For a full list of changes, visit https://github.com/tannerhelland/PhotoDemon/commits/master

In Conclusion…

Not bad for two months’ work, eh? I hope you enjoy all the new features in 5.2, and please remember to donate if you find the software useful!

Announcing PhotoDemon: A Fast, Free, Open-Source Photo Editor and Image Processor

PhotoDemon screenshot
PhotoDemon v4.2 in the midst of a massive batch conversion (1643 files)

tl;dr – I’ve spent 12 years working on an advanced image processing program. (Think Photoshop, but without any on-canvas painting tools.) The software is now available under the title “PhotoDemon.” It is fast, free, completely open-source (BSD licensed), and it provides a number of useful features, including macro recording and automated batch conversion. You can download it here.

I can’t often say that a blog post has been 12 years in the making… but believe it or not, this post has taken me that long to write.

Many years ago, when I was but a lowly high school student, I legitimately believed that I alone could produce the world’s greatest video game. It was going to be epic in every possible way – immersive 3D graphics, fully orchestrated musical score, hundreds of pages of witty dialogue. I was going to program the whole thing myself in Visual Basic 6.0, and it was going to be AWESOME.

(ROFL)

This might shock you, but that game never came to fruition.

Fortunately, my delusional teenage aspirations weren’t entirely a waste – I did end up writing many hours of original music for the game, and I also produced a suite of useful development tools. One of those tools was called the GenesisX Image Studio, after my one-man GenesisX Production Company. (Yes, that name sounded cool to my teenage mind.) The purpose of GenesisX Image Studio was to convert 24-bit image files to the game’s custom 8-bit Genesis X Format.

Perhaps you recall that back in 2000, bandwidth was hard to come by, and distributing a game chock full of large 24-bit images over the Internet simply wasn’t feasible. GIF images were still under patent protection, so there were concerns about using them, and PNG wasn’t widely known or supported. So I decided to write my own image format, and this was the program that converted JPEGs and BMPs to it:

GXF Compressor screenshot
Here’s a screenshot of the GenesisX Image Studio. I know – it burns the eyes a little. Don’t you love the red/black gradient? It seemed so edgy at the time. (facepalm)

While the GXF Compressor was hideous to look at, it included some interesting code, including a rather clever interactive palette editor. That palette editor was at the heart of the Genesis X Format. It worked by taking 256-color images and blending low-frequency colors at a ratio of their occurrences within the image. This way, it was possible to get a 256-color image down to 128 colors or less with very little degradation; the image would then be RLE compressed and optionally zLib compressed, and it was capable of producing downright tiny files.

GXF Palette Editor
The GenesisX Palette Editor. I’m not sure why I felt the need to plaster a bright red copyright message on the form… I’m fairly certain no one was interested in stealing my painfully amateurish code.

When the ultimate game project associated with this software died, I continued to peck away at the image studio, mostly because I enjoyed learning about image processing and the software already provided a framework for things like loading and saving images, zooming and scrolling them, and a rudimentary set of filters. Over time, I eliminated the 256-color feature set and focused only on 16 million color support. Eventually the ridiculous “GenesisX” moniker was dropped, and the project was renamed “DemonSpectre Image Workshop.” (DemonSpectre was my online alias at the time.)

DemonSpectre Image Workshop
By 2002, the project had become slightly less hideous. The red/black gradient was replaced by the blue/black gradient made famous by InstallShield, and a thoroughly useless logo was added to the left-hand side. The code base also grew to include a variety of new filters and processing techniques.

In 2002, Microsoft introduced the first version of Visual Studio .NET, effectively obsoleting the COM-based VB6 overnight. I was in university by then, and had become very aware that VB was not the right language for a programmer who wanted to be taken seriously in the U.S. job market. So I learned C++, Java, and Perl, though I retained a love for classic VB, in large part because it was the language that got me into programming in the first place.

The next 8-9 years saw slow, incremental upgrades to the software, usually the result of a random night or weekend when I was fed up with work and needed to focus on something not-work-related. Eventually I renamed the software “VB Photoshop” (no copyright problems there!), then later PhotoDemon, a mash-up of my old DemonSpectre moniker and the fact that the software had grown to focus primarily on photo editing.

In fact, my interest in digital photography led to many of the program’s best features, since I used PhotoDemon to implement tools that other image editing programs lacked or implemented poorly. (I’m looking at you, Photoshop batch conversion!) Since its inception, PhotoDemon also served as a testbed for my image processing work in other programming languages, because for all its flaws, classic VB is unbeatable as a rapid prototyping language. I still use it for first-implementation tests of obscure features or filters, simply because I can go from pseudocode to real-time implementation in minutes (versus hours in Java, and days/months in C). And because VB6 compiles down to native code (unlike the interpreted P-code of earlier versions), it’s perfect for prototyping image processing code, which often needs to execute in real-time.

PhotoDemon v4.2 menu screenshot
PhotoDemon has come a long way from its original GenesisX Image Studio roots. The current version looks quite nice, and it includes features I find lacking in other software – such as extensive accelerator (“hotkey”) support. For those who don’t utilize accelerators, the menus are designed to maximize discoverability. IMO they’re a significant improvement over most image editing software menus.

Because I continued to receive a surprising amount of traffic to my VB-oriented programming site, I would periodically strip interesting features out of PhotoDemon and publish them independently. In fact, most of my open-source programming projects are merely subsets of PhotoDemon’s codebase. (And it’s a surprisingly large codebase – over 30,000 lines – and that’s not including the 3rd-party DLLs it relies on for extra functionality.)

Every now and then, I’ll receive an email from a poor programmer who’s stuck supporting a legacy VB6 application and has consequently stumbled across my site. These emails always brighten my day, and they’re the reason I still provide VB6 projects despite the language being “dead” for more than 10 years. (Although “dead” is a relative term – Microsoft’s extended support lasted until 2008, and they have promised “it just works” compatibility for VB6 applications FOR THE LIFETIME of Windows 8. I know people have their criticisms of Microsoft, but no major tech company is half as good as they are when it comes to supporting legacy software. Hats off to Microsoft for that.)

Occasionally, these emails will ask me if I have a single project that condenses my many image processing techniques into a single piece of software. For ten years, my response to this question has been a vague, teasing, “maybe I do – you’ll have to wait and see!” I’m not sure why I never just told people about PhotoDemon… probably because they would pester me for copies of the code, and I hate sending out .zip files of large source directories, especially when I haven’t made up my mind about how I want to license said code.

But this summer, as I was sending out yet another one of these vague email responses, it struck me that I’d spent the past ten years hinting at PhotoDemon but never really thinking seriously about when it might live somewhere besides my hard drive. Wasn’t it time to seriously commit to getting the project in a workable state? (Anyone who knows me shouldn’t find this surprising – my motto has always been “better late than never,” and boy does this project meet that definition!)

So I committed, then and there, to getting PhotoDemon into a workable state. My last three months have been spent cleaning up its code base, stripping out useless functions and features, writing documentation, and coaxing it to work with modern Windows visual styles – no small feat, considering VB6 never worked with Windows XP visual styles, let alone Windows 7.

PhotoDemon current version screen shot
PhotoDemon, as it looks in August 2012. Note the use of Windows 7 visual styles, along with full MDI support. Also – no hideous background gradient! :)

Because I’m a glutton for punishment, I also got PhotoDemon working with modern version control software. (Here it is on GitHub.) I wonder if I’m the first person to try and get a massive VB6 codebase working properly with Git… Surprisingly, it does work, though it takes some tweaking thanks to VB’s strange intermixing of text and binary files. Maybe someday I’ll document what I did. Then again, maybe not – I’m not sure I want people trying to set up legacy VB projects with GitHub, lol.

After getting the code to a pleasantly robust state, I put up a preliminary project page for PhotoDemon on this site. That was six weeks ago. Thus far it seems to have been well-received among the VB programmers who frequent my site, and with the help of those programmers, many miscellaneous bugs have been squashed. After a rigorous few weeks of testing, I think PhotoDemon is finally stable enough to warrant broader use.

And that’s why this blog post exists.

Over the next few weeks, possibly months, I plan on releasing a series of “developer diaries” that discuss PhotoDemon’s features and design in detail. I don’t know many projects with a 12-year development time that spans from the developer first learning to program to becoming a professional coder, and I think my experiences could be useful for other young programmers looking to embark on their own open source project. Also, some of PhotoDemon’s more advanced capabilities – such as macro recording and playback – represent unique design challenges, and I think it could be worthwhile to discuss the implementation hurdles I faced in hopes of helping other programmers build such features right on their first try.

PhotoDemon v4.2 print dialog
PhotoDemon’s current interface aims to find that sweet spot between minimalism and power. For example, here’s the print dialog. I find most print dialogs to be woefully over-engineered, so this one provides only the options I use on a regular basis. Also, I just noticed that the “Orientation” label is misaligned vertically. D’oh! Better go fix that…

But for now, here’s what’s worth mentioning: PhotoDemon is stable, and I’d love your feedback on it. It’s designed as a portable app, meaning no installer is required. Just download the .zip, extract it, and run PhotoDemon.exe. (Not a Windows user? PhotoDemon should work with the latest stable release of Wine.)

Input is welcome from programmers and non-programmers alike. To download just the executable, use this link:

Download PhotoDemon (software only, no source code)

If you want the program AND its complete source code, download it from PhotoDemon’s GitHub page:

Download PhotoDemon (with complete source code)

A GitHub account is not required. Simply click the “ZIP” button with the cloud-and-arrow icon to download the source in standard VB6 format. (The ZIP button is just below the project description, in the top-left quadrant of the page.)

Issues can be submitted from the “Help” menu within PhotoDemon, or by visiting the Issues page, or by simply sending me an email.

Stay tuned for posts describing PhotoDemon’s (quite large) feature set in detail, as well as in-depth guides for its advanced features, including macro recording and batch conversion.

Finally, note that PhotoDemon is updated regularly. I tend to make commits on at least a weekly basis, and often more frequently than that. For the most up-to-date version of the software, download it from GitHub.

Thanks for your interest, and I hope you enjoy the software.

Support Linux by Not Writing Linux-Only Software

Phoronix ran a story today about the keynote address at this year’s Fedora India Conference.  The speech can be viewed in its entirety here, but one quote in particular is drawing attention:

The number one enemy we have today is ourselves. And I mean that with all seriousness. Too many times we shoot ourselves in our own foot, by the way we act, the way we deal with people, in our narrowminded-ness that we develop.

The quote and ensuing explanation appears around the 44:00 mark.  It’s worth a watch.

This is a great quote not just for Linux developers and contributors, but for Linux users as well.  It’s especially interesting coming from a Fedora project leader, considering the Fedora project is well-known for its very myopic rules about included software.  (FYI – Fedora does not include any proprietary software, including proprietary drivers, Adobe Flash, Skype, etc.  Is that an example of “shooting yourself in the foot”…?)

In this article, I’m not going to talk about the obvious “Linux is its own worst enemy” topics.  Plenty of other people are more qualified to talk about hardware support, FOSS jingoism, obnoxious users, and design problems.  No, I’d like to mention something more obscure, but still deserving of attention:

Developing software exclusively for Linux.

Linux-exclusive software should be the exception, not the rule

One of the Linux-centric software projects I’ve followed over the past several years is the OpenShot Video Editor project.  I first discovered OpenShot while researching Linux video editing software in 2009 for a series of Ubuntu articles.  At the time, I considered OpenShot the best choice for default video editor in Ubuntu 10.10.  Canonical eventually went with Pitivi instead, a decision I and many others wondered about.  To quote one article on the topic:

In my view, Ubuntu is doing desktop Linux a huge disservice by putting in basic, buggy tools and then advertising its product as having “video editing” capabilities. The short point is that it hasn’t, and users moving to Ubuntu on the basis of this promise will be bitterly disappointed, tainting their overall view of Linux.

Canonical later rethought this decision; in 11.10 they removed Pitivi from the default install.  The reasoning behind the decision?  According to the linked article:

The lack of ‘polish’ and maturity to the application was also highlighted, with one attendee wondering whether its ‘basic’ nature impacted negatively on the perception of the Ubuntu desktop as a whole.

Ironic, eh?

I don’t mean to slam Pitivi.  Collabora, the company who funds Pitivi’s development, is an incredible contributor to the open-source world and they employ a team of very talented developers.  Seriously – they just released a demo of a video editor built entirely in HTML5.  They’re incredible.

But writing desktop video editing software is tough, especially video editing software provided for free.  Pitivi continues to grow and improve, but it simply wasn’t ready for the big-time in 2009.  That’s okay.

Anyway, OpenShot continues to improve at an impressive rate.  In late 2010, OpenShot crossed what I consider the “maturity line” for an open source project – it began work on a Windows port.  The response to this was mixed, as always.  Many users realized the benefits of making OpenShot cross-platform.  Some, unfortunately, did not.  As one commenter said:

When I’ve seen projects that aim to be multi OS, the Linux version is always the second hand version. More people use Windows so new features are added to the Windows version and no one gives a shit about the Linux users. I have seen it many times, haven’t you?

Why not just use all the time for the Linux version to make it as great as possible. There are already more than enough video editors for Windows anyway.

This is a valid concern.  Take Songbird, for example.  Songbird started out as a promising cross-platform media player with top-tier Linux support.  This lasted four years, until the team made the hard choice to completely drop Linux.  It was an unfortunate loss, but it’s hard to fault the Songbird developers.  They’re a small team and audio support in Linux has never been simple to work with.  (FYI, untested nightly builds are still available for adventurous users.)

But I would argue that projects like Songbird are the exception and not the rule.  While it may seem like Linux-only projects are betraying their loyal base by developing Windows or OSX versions, I would argue that cross-platform development is actually better for Linux as a whole, better for individual software projects and their developers, and ultimately better for Linux users.

Cross-Platform Software Removes One of Linux’s Biggest Barriers of Entry

It’s been said a million times before: one of the hardest things about switching from Windows to Linux is learning new software.  This has gotten easier over time; after all, modern users are probably using the same browser on Windows that they would on Linux, and mature open source projects like LibreOffice, Pidgin, GIMP and Inkscape provide a similar experience regardless of which OS you use.  As we move to a world where more and more software lives within the browser, the switch will get even easier.

But when a new Linux user can’t find a Linux version of software that he or she is used to (Adobe products, MS Office, etc), they suddenly have a very good reason to give up on the platform as a whole.  Even if a Linux alternative is better than whatever they were using before, the fact that it isn’t familiar is often enough to scare them away.

The typical answer to this is: “everything would be better if Adobe and Apple and Microsoft and everyone else just released Linux versions of their software.”  I agree.  That would be better.

But does anyone really think this is going to happen?  Do you really envision a day when you can buy a copy of Microsoft Office for Linux?  I’m afraid I don’t.

So if we can’t force companies to release their software on Linux, we have to do the next best thing – take the best of Linux software and make it available on other platforms.  In the last five years, projects like Firefox and Chrome have done way more to improve Linux adoption than the Linux-only competitors of Epiphany and Konqueror – not because either of those projects are crap (just the opposite, they’re great), but because creating software only for Linux users doesn’t help people make the switch.

Now please don’t misunderstand.  I am absolutely not saying that projects like Epiphany and Konqueror are stupid, or that they don’t serve a purpose, or that they are hurting Linux.  Both are mature, well-written, technical accomplishments from talented contributors, and they definitely fill a niche.

But when it comes to making Linux a viable competitor to OSX or Windows, Firefox and Chrome are the ones to thank.

(Note: yes, I realize that webkit came from KHTML which came from Konqueror.  This doesn’t invalidate my point.)

In the perfect world, Linux users would have access to all the same software as Windows and OSX users.  Releasing a Windows and OSX port of your awesome Linux-only project is a step toward making that happen.

As a Developer, You Will Get More Donations, Support, and Feedback from a Cross-Platform Release

Let’s return to the Songbird example.  In the team’s blog post about why they dropped Linux, they provide the following chart “for perspective”:

It’s hard to argue with those numbers.  Yes, in certain areas (like translations) Linux users contributed more on a per-user basis than Windows or Mac users.  But not that much more.  When you factor in the difficulty of working with audio in Linux, you can see why the Songbird team made the tough decision to drop Linux support entirely.

This same pattern shows up elsewhere.  For gamers, consider the Humble Indie Bundle – Linux users donated 3x more money, per user, than Windows users.  That’s an awesome statistic.  But the sad reality is that there are tons more Windows users, and for the Humble Indie Bundle that meant that revenue from Windows users as a whole was significantly larger than revenue from Linux users.

I point this out only to show that open source developers can receive many benefits – financial and otherwise – from releasing software on as many platforms as possible.  And if you as a developer get more feedback and more money, that will help you produce better software for everyone who uses your products – including your loyal Linux fanbase.

This Isn’t a Major Problem, But It’s Something to Consider

Is Linux-only software the biggest problem facing the open source community?  Hell no.  It’s probably not in the top 10 or 100 or 1000 problems.  But I do think it’s something to point out, especially for Linux projects that seem perpetually close to becoming “great”… only never quite getting there.  Several examples come to mind.

Calligra (formerly KOffice) is a promising open-source office suite developed by KDE.  Personally, I think the project is way ahead of LibreOffice in key areas, particularly the interface.  Consider their word processor, which is one of the few designed with widescreen monitors in mind:

Calligra Words screenshot. Note the very nice use of horizontal real estate. (image courtesy of http://www.calligra-suite.org/words/)

I’m not a Linux software developer, but I would love to contribute to the Calligra project as a tester.  My problem?  I spend most of my time in Windows, and I need a word processor that works in both OSes.  Calligra’s Windows version is in a perpetual state of disarray, and I don’t want the hassle of running my word processor in a VM.  How many testers and contributors is this very cool project missing out on because there is no Windows port?  Would a complex piece of software like LibreOffice or Inkscape be half as good if it had remained Linux-only?  I doubt it.

As another example – Linux video editors.  We’re finally reaching a world where Linux video editors are stable and usable (kudos to the excellent Kdenlive team and OpenShot, among others), but it has taken far, far too long to get here.  In my opinion, the biggest problem is that most major Linux video projects have remained Linux-only.  Kino and Cinelerra have always been in desperate need of testing and feedback, and by ignoring Windows they aren’t doing themselves – or their faithful Linux users – any favors.

Disclaimers

Now I realize that you can’t just click a magical button that makes your Linux-only project compile under Windows and OSX.  I get that.  If you’re an individual developer who doesn’t have the time or the resources to test and compile your code for Windows, I totally understand.  Some projects don’t make sense multi-platform, and sometimes there are very good technical reasons why a project doesn’t make an OSX or Windows version available.

But if you haven’t considered cross-platform support, please do.  Look for help on developer forums or IRC.  Talk to Windows packagers of other open source projects.  Follow OpenShot’s example and ask your userbase for help.  Just don’t fool yourself into thinking that you are helping Linux by not providing Windows and OSX versions of your software.

Because you’re probably not.  By releasing your software for as many OSes as possible, you are not betraying Linux or open source.  You are helping it.  If people think Linux represents a small fraction of overall users now, imagine how much worse it would be if less cross-platform software were available.

As for my fellow Linux users: please don’t troll developers when they decide to go cross-platform.  FOSS is not just about developing for Linux.  Open source can – and should – work to increase the freedom of all users, everywhere, regardless of what operating system they use.  When your favorite piece of Linux software decides to release a Windows version, don’t think of it as betrayal.  Think of it as a way to advertise the benefits of open source to your heathen, Windows-using friends.

As always, I am extremely grateful to the talented individuals that use their time and talents to provide open-source software for free.  Thanks for all your hard work.

Seven grayscale conversion algorithms (with pseudocode and VB6 source code)

I have uploaded a great many image processing demonstrations over the years, but today’s project – grayscale conversion techniques – is actually the image processing technique that generates the most email queries for me.  I’m glad to finally have a place to send those queries!

Despite many requests for a grayscale demonstration, I have held off coding anything until I could really present something unique.  I don’t like adding projects to this site that offer nothing novel or interesting, and there are already hundreds of downloads – in every programming language – that demonstrate standard color-to-grayscale conversions.   So rather than add one more “here’s a grayscale algorithm” article, I have spent the past week collecting every known grayscale conversion routine.  To my knowledge, this is the only project on the Internet that presents seven unique grayscale conversion algorithms, and at least two of the algorithms – custom # of grayscale shades with and without dithering – were written from scratch for this very article.

So without further ado, here are seven unique ways to convert a full-color image to grayscale.  (Note: I highly recommend reading the full article so you understand how the various algorithms work and what their purposes might be, but if all you want is the source code, you’ll find it past all the pictures and just above the donation link.)

Grayscale – An Introduction

Black and white (or monochrome) photography dates back to the mid-19th century.  Despite the eventual introduction of color photography, monochromatic photography remains popular.  If anything, the digital revolution has actually increased the popularity of monochromatic photography because any digital camera is capable of taking black-and-white photographs (whereas analog cameras required the use of special monochromatic film).  Monochromatic photography is sometimes considered the “sculpture” variety of photographic art.  It tends to abstract the subject, allowing the photographer to focus on form and interpretation instead of simply reproducing reality.

Because the terminology black-and-white is imprecise – black-and-white photography actually consists of many shades of gray – this article will refer to such images as grayscale.

Several other technical terms will be used throughout my explanations.  The first is color space.  A color space is a way to visualize a shape or object that represents all available colors.  Different ways of representing color lead to different color spaces.  The RGB color space is represented as a cube, HSL can be a cylinder, cone, or bicone, YIQ and YPbPr have more abstract shapes.  This article will primarily reference the RGB and HSL color spaces.

I will also refer frequently to color channels.  Most digital images are comprised of three separate color channels: a red channel, a green channel, and a blue channel.  Layering these channels on top of each other creates a full-color image.  Different color models have different channels (sometimes the channels are colors, sometimes they are other values like lightness or saturation), but this article will primarily focus on RGB channels.

How all grayscale algorithms fundamentally work

All grayscale algorithms utilize the same basic three-step process:

  1. Get the red, green, and blue values of a pixel
  2. Use fancy math to turn those numbers into a single gray value
  3. Replace the original red, green, and blue values with the new gray value

When describing grayscale algorithms, I’m going to focus on step 2 – using math to turn color values into a grayscale value. So, when you see a formula like this:

Gray = (Red + Green + Blue) / 3

Recognize that the actual code to implement such an algorithm looks like:


For Each Pixel in Image {

   Red = Pixel.Red
   Green = Pixel.Green
   Blue = Pixel.Blue

   Gray = (Red + Green + Blue) / 3

   Pixel.Red = Gray
   Pixel.Green = Gray
   Pixel.Blue = Gray

}

On to the algorithms!

Sample Image:

Promo art for The Secret of Monkey Island: Special Edition, ©2009 LucasArts
This bright, colorful promo art for The Secret of Monkey Island: Special Edition will be used to demonstrate each of our seven unique grayscale algorithms.

Method 1 – Averaging (aka “quick and dirty”)

Grayscale - average method
Grayscale image generated from the formula: Average(Red, Green, Blue)

This method is the most boring, so let’s address it first.  “Averaging” is the most common grayscale conversion routine, and it works like this:

Gray = (Red + Green + Blue) / 3

Fast, simple – no wonder this is the go-to grayscale algorithm for rookie programmers.  This formula generates a reasonably nice grayscale equivalent, and its simplicity makes it easy to implement and optimize (look-up tables work quite well).  However, this formula is not without shortcomings – while fast and simple, it does a poor job of representing shades of gray relative to the way humans perceive luminosity (brightness).  For that, we need something a bit more complex.

Method 2 – Correcting for the human eye (sometimes called “luma” or “luminance,” though such terminology isn’t really accurate)

Grayscale generated using values related to cone density in the human eye
Grayscale generated using a formula similar to (Red * 0.3 + Green * 0.59 + Blue * 0.11)

It’s hard to tell a difference between this image and the one above, so let me provide one more example.  In the image below, method #1 or the “average method” covers the top half of the picture, while method #2 covers the bottom half:

Grayscale methods 1 and 2 compared
If you look closely, you can see a horizontal line running across the center of the image. The top half (the average method) is more washed-out than the bottom half. This is especially visible in the middle-left segment of the image, beneath the cheekbone of the background skull.

The difference between the two methods is even more pronounced when flipping between them at full-size, as you can do in the provided source code.  Now might be a good time to download my sample project (available at the bottom of this article) so you can compare the various algorithms side-by-side.

This second algorithm plays off the fact that cone density in the human eye is not uniform across colors.  Humans perceive green more strongly than red, and red more strongly than blue.  This makes sense from an evolutionary biology standpoint – much of the natural world appears in shades of green, so humans have evolved greater sensitivity to green light.  (Note: this is oversimplified, but accurate.)

Because humans do not perceive all colors equally, the “average method” of grayscale conversion is inaccurate.  Instead of treating red, green, and blue light equally, a good grayscale conversion will weight each color based on how the human eye perceives it.  A common formula in image processors (Photoshop, GIMP) is:

Gray = (Red * 0.3 + Green * 0.59 + Blue * 0.11)

Surprising to see such a large difference between the red, green, and blue coefficients, isn’t it?  This formula requires a bit of extra computation, but it results in a more dynamic grayscale image.  Again, downloading the sample program is the best way to appreciate this, so I recommend grabbing the code, experimenting with it, then returning to this article.

It’s worth noting that there is disagreement on the best formula for this type of grayscale conversion.  In my project, I have chosen to go with the ITU-R BT.709 recommendation, which reflects modern HD video standards.  This formula, sometimes called Luma, looks like this:

Gray = (Red * 0.2126 + Green * 0.7152 + Blue * 0.0722)

Other digital image and video formats use the older BT.601 recommendation, which calls for slightly different coefficients:

Gray = (Red * 0.299 + Green * 0.587 + Blue * 0.114)

A full discussion of which formula is “better” is beyond the scope of this article.  For further reading, I strongly suggest the work of Charles Poynton.  For 99% of programmers, the difference between these two formulas is irrelevant.  Both are perceptually preferable to the “average method” discussed at the top of this article.
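
If you want to keep the per-pixel loop free of floating-point math, a common trick is to scale the coefficients by 256 and use integer division. A quick sketch of the BT.709 version (my own illustrative code; note that 54 + 183 + 19 = 256):


Private Function LumaBT709(ByVal r As Long, ByVal g As Long, ByVal b As Long) As Long
    '54/256, 183/256, and 19/256 approximate 0.2126, 0.7152, and 0.0722
    LumaBT709 = (r * 54 + g * 183 + b * 19) \ 256
End Function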

Method 3 – Desaturation

Grayscale generated from a Desaturate algorithm
A desaturated image. Desaturating an image takes advantage of the ability to treat the (R, G, B) colorspace as a 3-dimensional cube. Desaturation approximates a luminance value for each pixel by choosing a corresponding point on the neutral axis of the cube.

Next on our list of methods is desaturation.

There are various ways to describe the color of a pixel.  Most programmers use the RGB color model, where each color is described by its red, green, and blue components.  While this is a nice way for a machine to describe color, the RGB color space can be difficult for humans to visualize.  If I tell you, “oh, I just bought a car.  Its color is RGB(122, 0, 255),” you probably can’t picture the color I’m describing.  If, however, I say, “I just bought a car.  It is a bright, vivid shade of violet,” you can probably picture the color in question.  (Note: this is a hypothetical example.  I do not drive a purple car.  :)

For this reason (among others), the HSL color space is sometimes used to describe colors.  HSL stands for hue, saturation, lightness.  Hue could be considered the name of the color – red, green, orange, yellow, etc.  Mathematically, hue is described as an angular dimension on the color wheel (range [0,360]), where pure red occurs at 0°, pure green at 120°, pure blue at 240°, then back to pure red at 360°.  Saturation describes how vivid a color is; a very vivid color has full saturation, while gray has no saturation.  Lightness describes the brightness of a color; white has full lightness, while black has zero lightness.

Desaturating an image works by converting an RGB triplet to an HSL triplet, then forcing the saturation to zero. Basically, this takes a color and converts it to its least-saturated variant.  The mathematics of this conversion are more complex than this article warrants, so I’ll simply provide the shortcut calculation.  A pixel can be desaturated by finding the midpoint between the maximum of (R, G, B) and the minimum of (R, G, B), like so:

Gray = ( Max(Red, Green, Blue) + Min(Red, Green, Blue) ) / 2

In terms of the RGB color space, desaturation forces each pixel to a point along the neutral axis running from (0, 0, 0) to (255, 255, 255).  If that makes no sense, take a moment to read this Wikipedia article about the RGB color space.

Desaturation results in a flatter, softer grayscale image.  If you compare this desaturated sample to the human-eye-corrected sample (Method #2), you should notice a difference in the contrast of the image.  Method #2 seems more like an Ansel Adams photograph, while desaturation looks like the kind of grayscale photo you might take with a cheap point-and-shoot camera.  Of the three methods discussed thus far, desaturation results in the flattest (least contrast) and darkest overall image.

Method 4 – Decomposition (think of it as de-composition, i.e. not the biological process!)

Decomposition - Max Values
Decomposition using maximum values
Decomposition - Minimum Values
Decomposition using minimum values

Decomposing an image (sounds gross, doesn’t it?) could be considered a simpler form of desaturation.  To decompose an image, we force each pixel to the highest (maximum) or lowest (minimum) of its red, green, and blue values.  Note that this is done on a per-pixel basis – so if we are performing a maximum decompose and pixel #1 is RGB(255, 0, 0) while pixel #2 is RGB(0, 0, 64), we will set pixel #1 to 255 and pixel #2 to 64.  Decomposition only cares about which color value is highest or lowest – not which channel it comes from.

Maximum decomposition:

Gray = Max(Red, Green, Blue)

Minimum decomposition:

Gray = Min(Red, Green, Blue)

As you can imagine, a maximum decomposition provides a brighter grayscale image, while a minimum decomposition provides a darker one.

This method of grayscale reduction is typically used for artistic effect.

Method 5 – Single color channel

Grayscale - red channel only
Grayscale generated by using only red channel values.
Grayscale - green channel only
Grayscale generated by using only green channel values.
Grayscale - blue channel only
Grayscale generated by using only blue channel values.

Finally, we reach the fastest computational method for grayscale reduction – using data from a single color channel.  Unlike all the methods mentioned so far, this method requires no calculations.  All it does is pick a single channel and use that as the grayscale value, as in:

Gray = Red

…or:

Gray = Green

…or:

Gray = Blue

Believe it or not, this shitty algorithm is the one most digital cameras use for taking “grayscale” photos.  CCDs in digital cameras are comprised of a grid of red, green, and blue sensors, and rather than perform the necessary math to convert RGB values to gray ones, they simply grab a single channel (green, for the reasons mentioned in Method #2 – human eye correction) and call that the grayscale one.  For this reason, most photographers recommend against using your camera’s built-in grayscale option.  Instead, shoot everything in color and then perform the grayscale conversion later, using whatever method leads to the best result.

It is difficult to predict the results of this method of grayscale conversion.  As such, it is usually reserved for artistic effect.

Method 6 – Custom # of gray shades

Grayscale using only 4 shades
Grayscale using only 4 shades - black, dark gray, light gray, and white

Now it’s time for the fun algorithms.  Method #6, which I wrote from scratch for this project, allows the user to specify how many shades of gray the resulting image will use.  Any value between 2 and 256 is accepted; 2 results in a black-and-white image, while 256 gives you an image identical to Method #1 above.  This project only uses 8-bit color channels, but for 16 or 24-bit grayscale images (and their resulting 65,536 and 16,777,216 maximums) this code would work just fine.

The algorithm works by selecting X # of gray values, equally spread (inclusively) between zero luminance – black – and full luminance – white.  The above image uses four shades of gray.  Here is another example, using sixteen shades of gray:

Grayscale using 16 shades of gray
In this image, we use 16 shades of gray spanning from black to white

This grayscale algorithm is a bit more complex. It looks something like:


ConversionFactor = 255 / (NumberOfShades - 1)
AverageValue = (Red + Green + Blue) / 3
Gray = Integer((AverageValue / ConversionFactor) + 0.5) * ConversionFactor

Notes:
- NumberOfShades is a value between 2 and 256
- Technically, any grayscale algorithm could be used to calculate AverageValue; it simply provides an initial gray value estimate
- The "+ 0.5" addition imitates the rounding of an integer conversion; YMMV depending on which programming language you use, as some round automatically

I enjoy the artistic possibilities of this algorithm.  The attached source code renders all grayscale images in real-time, so for a better understanding of this algorithm, load up the sample code and rapidly scroll between different numbers of gray shades.

Method 7 - Custom # of gray shades with dithering (in this example, horizontal error-diffusion dithering)

Grayscale - four shades, dithered
This image also uses only four shades of gray (black, dark gray, light gray, white), but it adds full error-diffusion dithering support

Our final algorithm is perhaps the strangest one of all.  Like the previous method, it allows the user to specify any value in the [2,256] range, and the algorithm will automatically calculate the best spread of grayscale values for that range.  However, this algorithm also adds full dithering support.

What is dithering, you ask?  In image processing, dithering uses optical illusions to make an image look more colorful than it actually is.  Dithering algorithms work by interspersing whatever colors are available into new patterns - ordered or random - that fool the human eye into perceiving more colors than are actually present.  If that makes no sense, take a look at this gallery of dithered images.

There are many different dithering algorithms.  The one I provide is one of the simpler error-diffusion mechanisms: a one-dimensional diffusion that bleeds color conversion errors from left to right.
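
Here is a minimal sketch of that idea (illustrative only – the download's actual routine lives in the drawGrayscaleCustomShadesDithered sub).  conversionFactor is derived exactly as in Method #6:


'Dither a single row, bleeding the full conversion error left-to-right
Private Sub DitherRowLeftToRight(ByRef gray() As Byte, ByVal imgWidth As Long, ByVal y As Long, ByVal conversionFactor As Long)

    Dim x As Long, oldVal As Long, newVal As Long, dErr As Long
    dErr = 0

    For x = 0 To imgWidth - 1
        oldVal = CLng(gray(x, y)) + dErr

        'Snap to the nearest allowed shade, then clamp to the valid byte range
        newVal = ((oldVal + (conversionFactor \ 2)) \ conversionFactor) * conversionFactor
        If newVal < 0 Then newVal = 0
        If newVal > 255 Then newVal = 255

        gray(x, y) = newVal
        dErr = oldVal - newVal   'the error carried to the next pixel
    Next x

End Sub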

If you look at the image above, you'll notice that only four colors are present - black, dark gray, light gray, and white - but because these colors are mixed together, from a distance this image looks much sharper than the four-color non-dithered image under Method #6.  Here is a side-by-side comparison:

Side-by-side of dithered and non-dithered 4-color grayscale images
The left side of the image is a 4-shade non-dithered image; the right side is a 4-shade image WITH dithering

When few colors are available, dithering preserves more nuances than a non-dithered image, but the trade-off is a "dirty," speckled look.  Some dithering algorithms are better than others; the one I've used falls somewhere in the middle, which is why I selected it.

As a final example, here is a 16-color grayscale image with full dithering, followed by a side-by-side comparison with the non-dithered version:

Grayscale image, 16 shades, dithered
Hard to believe only 16 shades of gray are used in this image, isn't it?
Grayscale, 16 shades, dithered vs non-dithered
As the number of shades of gray in an image increases, dithering artifacts become less and less noticeable. Can you tell which side of the image is dithered and which is not?

Because the code for this algorithm is fairly complex, I'm going to refer you to the download for details. Simply open the Grayscale.frm file in your text editor of choice, then find the drawGrayscaleCustomShadesDithered sub. It has all the gory details, with comments.
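
That said, the core idea fits in a few lines.  Here is a simplified VB6 sketch (my own naming and picturebox handling, not the project's actual code, and it omits the download's optimizations):

'Custom gray shades with one-dimensional (left-to-right) error-diffusion dithering
'Assumes both pictureboxes use pixel ScaleMode with AutoRedraw enabled
Private Sub DrawGrayscaleDithered(srcPic As PictureBox, dstPic As PictureBox, _
                                  ByVal numShades As Long)
    Dim x As Long, y As Long, clr As Long, gray As Long
    Dim avgValue As Double, errValue As Double
    Dim conversionFactor As Double
    conversionFactor = 255# / (numShades - 1)
    For y = 0 To srcPic.ScaleHeight - 1
        errValue = 0    'reset the running error at the start of each row
        For x = 0 To srcPic.ScaleWidth - 1
            clr = srcPic.Point(x, y)
            'Gray value of this pixel, plus any error carried from the previous pixel
            avgValue = ((clr And &HFF&) + ((clr \ &H100&) And &HFF&) + ((clr \ &H10000) And &HFF&)) / 3 + errValue
            If avgValue < 0 Then avgValue = 0
            If avgValue > 255 Then avgValue = 255
            'Snap to the nearest available shade...
            gray = Int((avgValue / conversionFactor) + 0.5) * conversionFactor
            '...then bleed the rounding error onto the pixel to the right
            errValue = avgValue - gray
            dstPic.PSet (x, y), RGB(gray, gray, gray)
        Next x
    Next y
    dstPic.Refresh
End Sub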

Conclusion

If you're reading this from a slow Internet connection, I apologize for the image-heavy nature of this article.  Unfortunately, the only way to really demonstrate all these grayscale techniques is by showing many examples!

The source code for this project, like all image processing code on this site, runs in real-time.  The GUI is simple and streamlined, automatically hiding and displaying relevant user-adjustable options as you click through the various algorithms:

GUI of the provided source code
GUI of the provided source code. The program also allows you to load your own images.

Each algorithm is provided as a stand-alone method, accepting a source and destination picturebox as parameters.  I designed it this way so you can grab whatever algorithms interest you and drop them straight into an existing project, without need for modification.
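
For example, wiring the dithering sketch from earlier to a button in your own form could be as simple as this (the control names are hypothetical, and the subs in the download may take slightly different parameter lists - check their declarations):

'Render a dithered, 16-shade grayscale copy of picOriginal into picResult
Private Sub cmdGrayscale_Click()
    DrawGrayscaleDithered picOriginal, picResult, 16
End Sub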

Comments and suggestions are welcome.  If you know of any interesting grayscale conversion algorithms I might have missed, please let me know.

(Fun fact: want to convert a grayscale image back to color?  If so, check out my real-time image colorization project.)

DISCLAIMER: These download files are regularly scanned to ensure they remain free from malicious content. Unfortunately, some virus scanners will flag these .zip files as suspicious simply because they contain source code and/or executable files. I have submitted my projects to a number of companies in an attempt to rectify these false-positives. Some have been cooperative. Others have not. If your virus scanner alerts you regarding these files, please allow the file to be submitted for further analysis (if your program allows for that). This should help ensure that any false-positive warnings gradually disappear for all users.

This site - and its many free downloads - are 100% funded by donations. Please consider a small contribution to fund server costs and to help me support my family. Even $1.00 helps. Thank you!

Real-time Diffuse (Spread) Image Filter in VB6

One brand of camera diffusion lenses
A set of camera diffusion lenses.

In traditional photography and film, a diffusion filter is used to soften light from a flash or stationary lamp.  Specialized lenses are available for this purpose, but the effect can be cheaply replicated by smearing petroleum jelly over the light (seriously) or by shooting through a sheet of nylon.

In image processing, a diffusion filter often means something else entirely.  Photoshop’s “Diffuse” filter randomly rearranges pixels within a set radius.  (GIMP can do the same thing, but the effect is more accurately titled “Spread.”)  This effect can be animated for a cheap explosion effect – something a number of SNES, Genesis, and DOS games used to great effect.

This project demonstrates a simple, real-time method for replicating such an effect.  All code is commented and reasonably optimized, and an animated “special effect” version is provided for those interested.  Unlike Photoshop, this routine lets you specify separate horizontal and vertical maximum random distances, and it can optionally wrap pixels around image edges.
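
To illustrate the basic approach, here is a minimal VB6 sketch: each destination pixel is pulled from a randomly chosen source pixel within the allowed distances.  All names are mine, and this sketch favors clarity over speed - see the download for the real thing.  (Call Randomize once at startup so Rnd is properly seeded.)

'Diffuse (spread) filter: copy each pixel from a random nearby source pixel
'Assumes both pictureboxes use pixel ScaleMode with AutoRedraw enabled
Private Sub DiffuseImage(srcPic As PictureBox, dstPic As PictureBox, _
                         ByVal xMax As Long, ByVal yMax As Long, _
                         ByVal wrapEdges As Boolean)
    Dim x As Long, y As Long, nx As Long, ny As Long
    Dim w As Long, h As Long
    w = srcPic.ScaleWidth
    h = srcPic.ScaleHeight
    For y = 0 To h - 1
        For x = 0 To w - 1
            'Pick a random offset in [-xMax, xMax] and [-yMax, yMax]
            nx = x + Int(Rnd * (2 * xMax + 1)) - xMax
            ny = y + Int(Rnd * (2 * yMax + 1)) - yMax
            If wrapEdges Then
                'Wrap out-of-bounds coordinates around the opposite edge
                nx = ((nx Mod w) + w) Mod w
                ny = ((ny Mod h) + h) Mod h
            Else
                'Clamp out-of-bounds coordinates to the nearest edge
                If nx < 0 Then nx = 0
                If nx > w - 1 Then nx = w - 1
                If ny < 0 Then ny = 0
                If ny > h - 1 Then ny = h - 1
            End If
            dstPic.PSet (x, y), srcPic.Point(nx, ny)
        Next x
    Next y
    dstPic.Refresh
End Sub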

LittleBigPlanet mini poster
Here's the original image (a poster for LittleBigPlanet)
Here is the same image with a diffuse filter applied (max distance=5)
...and here is the image again, but with max distance = 50
...and one more example. This time, edge wrapping has been enabled. Note the bleed of planet pixels at the top and black pixels at the bottom.


Stained Glass Effect (using VB6 and GIMP)

I wanted to title this article “a novel method for matrix randomization using polygons and custom differential post-processing blending”… but that was a bit long, even for me.

Why such a complex title?

It all started with a strange idea I had today.  I was thinking of common ways to randomize image data (don’t ask why), and it struck me that the most common randomization method – varying RGB data of single pixels – is not the most interesting way to go about it.  Why not use lines, triangles, or other polygons to randomize an image?  How would that look?

To test my theory, I wrote a quick program that selects two random pixels in an image (the second within a user-specified maximum distance of the first), averages their colors, then draws a line of that averaged color between the two points. When repeated over and over again, such an algorithm leads to some interesting effects…
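
In VB6 sketch form (my names and parameter choices, not necessarily the program's), a single iteration looks like this; repeat it 100,000 times, refreshing the screen periodically, for the effects shown below.  For the triangle variant, pick a third point the same way and average three colors instead of two.

'Draw one randomized line: pick a point, pick a second point within
'maxLength of it (per axis), average the two colors, and connect them
'Assumes the picturebox uses pixel ScaleMode with AutoRedraw enabled
Private Sub DrawRandomLine(pic As PictureBox, ByVal maxLength As Long)
    Dim x1 As Long, y1 As Long, x2 As Long, y2 As Long
    Dim c1 As Long, c2 As Long, r As Long, g As Long, b As Long
    x1 = Int(Rnd * pic.ScaleWidth)
    y1 = Int(Rnd * pic.ScaleHeight)
    x2 = x1 + Int(Rnd * (2 * maxLength + 1)) - maxLength
    y2 = y1 + Int(Rnd * (2 * maxLength + 1)) - maxLength
    'Keep the second point inside the image
    If x2 < 0 Then x2 = 0
    If x2 > pic.ScaleWidth - 1 Then x2 = pic.ScaleWidth - 1
    If y2 < 0 Then y2 = 0
    If y2 > pic.ScaleHeight - 1 Then y2 = pic.ScaleHeight - 1
    'Average the colors of the two endpoints
    c1 = pic.Point(x1, y1)
    c2 = pic.Point(x2, y2)
    r = ((c1 And &HFF&) + (c2 And &HFF&)) \ 2
    g = (((c1 \ &H100&) And &HFF&) + ((c2 \ &H100&) And &HFF&)) \ 2
    b = (((c1 \ &H10000) And &HFF&) + ((c2 \ &H10000) And &HFF&)) \ 2
    pic.Line (x1, y1)-(x2, y2), RGB(r, g, b)
End Sub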

I started with this God of War 3 image...
...and got this (100,000 iterations of lines with max length 42).
Here's the same image, but with lines of max length 85.

Kinda cool.  I’m not sure what to call this effect… although it looks “furry” to me.  Should we invent a new word – furrification?

Once I had lines working, my next curiosity involved polygons.  Here’s the same picture, but with triangle randomization:

Same parameters as the first line-based randomization above (100,000 iterations, 42 max length).
Same parameters as the second line-based image above (100,000 iterations, max length 85)

This is also a cool effect, especially when you watch it in action.  (The program refreshes the screen every 100 iterations.)

While I didn’t go to the trouble of implementing additional polygons, the code is primed and ready for it.  In fact, it would be trivial to draw polygons of any segment count.

Once I had my newly randomized images, I decided to pop into GIMP and do a bit of post-processing.  It was then that I realized this could be used to create pretty sweet stained glass images:

Sweet!

It’s trivial to create an image like this – simply open up your base image, then add a triangle-randomized copy over the top as a new layer.  Set the layer mode to “difference” and bam: stained glass!

Same effect, but with a larger triangle size.

Other blending modes provide interesting effects – for example, multiply:

Same two images as the first stained glass example - just the blending mode has changed.
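
If you'd rather apply these blends in your own code instead of GIMP, the per-channel math is simple.  These helper functions are mine, not GIMP's, and they assume 8-bit channel values in the [0, 255] range:

'Difference blend: the absolute difference of two channel values
Private Function BlendDifference(ByVal baseVal As Long, ByVal overlayVal As Long) As Long
    BlendDifference = Abs(baseVal - overlayVal)
End Function

'Multiply blend: the product of two channel values, rescaled to [0, 255]
Private Function BlendMultiply(ByVal baseVal As Long, ByVal overlayVal As Long) As Long
    BlendMultiply = (baseVal * overlayVal) \ 255
End Function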

Anyway, I thought this was an interesting exploration in using a randomized copy of an image as an overlay.
