How to use a scanner (or TWAIN-compatible digital camera) in VB6

Today’s project demonstrates how to implement full scanner support from within a VB6 project. As a bonus, it also provides support for TWAIN-compatible digital cameras.

Before we begin

Because VB6 does not include a native scanner library, some sort of third-party DLL is required for scanner access.  My preferred choice is the free, public-domain EZTW32 library, which I have happily used for many years.  (The first version of EZTW32 was released in 1994!)  This project uses the most recent version of EZTW32 at the time of this writing: v1.19, updated 2009.02.22.  You can check for a newer version of the library here.

There are two versions of the EZTW32 library: a free, public-domain library – called “classic” – and a more sophisticated, paid version – called “pro.”  This project utilizes only the classic version.  The paid version includes many advanced features, and if you are interested in anything beyond simply capturing images from a scanner, it may be worth a look.  A full description of the “pro” version’s feature set is available here.

While the EZTW32 library provides many ways of interacting with the scanner, this project will focus on the following:

Private Declare Function TWAIN_IsAvailable Lib "EZTW32.dll" () As Long
Private Declare Function TWAIN_SelectImageSource Lib "EZTW32.dll" (ByVal hwndApp As Long) As Long
Private Declare Function TWAIN_AcquireToFilename Lib "EZTW32.dll" (ByVal hwndApp As Long, _
 ByVal sFile As String) As Long

The sample project includes a copy of v1.19 of the EZTW32 library.  If you would like to download a newer version of the library, simply copy the new version of EZTW32.dll into the same directory as the executable file (or .VBP file).  It should work without a problem.

Bonus tip: how to load DLLs from any location

Because a third-party library is required to access a scanner in VB6, it is useful to know how to load a DLL from any location.  By default, VB6 will attempt to load a DLL from the computer’s system folder.  This is not ideal when developing portable applications (i.e. applications that can be run without requiring an installer), because it requires the user to manually copy files into their system folder… and that’s bad for a variety of reasons (security, potential for mistakes, etc.).  It is possible to register a DLL from another location using the regsvr32 command, but I don’t like regsvr32 because it adds entries to the user’s system registry, and it must be re-run every time the project is moved to a new folder.

So the best solution, in my opinion, is to tell VB to expect the DLL to appear in the same folder as the project itself.  (Alternatively, a /plugins/ sub-directory could be used.)  We do this by adding the following code to the General Declarations section of the project:

Private Declare Function LoadLibrary Lib "kernel32" Alias "LoadLibraryA" (ByVal lpLibFileName _
 As String) As Long
Private Declare Function FreeLibrary Lib "kernel32" (ByVal hLibModule As Long) As Long
Dim hLib As Long

Then, in the Form_Load sub, add this code to determine the project directory and tell VB to load any EZTW32 functions from the DLL in that directory:

Dim ProgramPath As String
ProgramPath = App.Path
If Right(ProgramPath, 1) <> "\" Then ProgramPath = ProgramPath & "\"

hLib = LoadLibrary(ProgramPath & "EZTW32.dll")
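
LoadLibrary returns 0 if the DLL cannot be found or loaded, so it is worth checking the return value before calling any EZTW32 functions.  Here is a minimal sketch of such a check (the message text and behavior are only a suggestion, not part of the sample project):

If hLib = 0 Then
    MsgBox "EZTW32.dll was not found in " & ProgramPath & ". Scanner support will be unavailable.", _
     vbExclamation
End If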

Finally, add this to the Form_Unload sub to release the DLL when the program terminates:

FreeLibrary hLib

The sample project (available below) demonstrates this technique, so feel free to download that instead of copying-and-pasting the code from this page.

What this project demonstrates

Because this project is focused on the basics of using a scanner, it demonstrates only the following:

  • How to check whether the system offers scanner support (i.e. is a scanner driver loaded?)
  • How to let the user choose a scanner, on systems with more than one scanner
  • How to launch the scanner’s built-in software so the user can preview and scan an image
  • How to send the scanned image to a temporary file, then load that file into a VB picture box

All of the above also applies to TWAIN-compatible digital cameras, which the software treats just like a scanner.
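
To give you a feel for how the three declared functions fit together, here is a minimal sketch of the acquisition sequence.  The control names (cmdScan, picScanned) and the temporary file name are hypothetical, and the return-value check assumes non-zero means “TWAIN is available” – check the EZTW32 documentation for the exact conventions of the version you use; the sample project’s actual code differs slightly.

Private Sub cmdScan_Click()

    'Make sure a TWAIN source (scanner or TWAIN-compatible camera) is available
    If TWAIN_IsAvailable() = 0 Then
        MsgBox "No TWAIN-compatible scanner or camera was found.", vbExclamation
        Exit Sub
    End If
    
    'Let the user pick a device if more than one TWAIN source is installed
    TWAIN_SelectImageSource Me.hWnd
    
    'Acquire the image to a temporary bitmap file...
    Dim tmpFile As String
    tmpFile = App.Path & IIf(Right$(App.Path, 1) = "\", "", "\") & "scan_temp.bmp"
    TWAIN_AcquireToFilename Me.hWnd, tmpFile
    
    '...and, if a file was produced, load it into a picture box
    '(a more robust check would use the function's return value - see the EZTW32 docs)
    If Dir$(tmpFile) <> "" Then picScanned.Picture = LoadPicture(tmpFile)
    
End Sub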

Finally, as mentioned earlier, EZTW32 has a paid “pro” version that offers additional features. You can learn more about the “pro” version here.

Caveats

As you may have inferred from its title, EZTW32 relies on the TWAIN protocol for accessing a scanner. TWAIN is one of several ways to interact with a scanner; other common options include WIA (Windows Image Acquisition) on Windows, and SANE (Scanner Access Now Easy) on Linux.

TWAIN has some advantages and disadvantages compared to WIA and SANE. One of TWAIN’s unique characteristics is that it expects the scanner (via its driver) to provide its own user interface. The advantage of this approach is that on a given system, TWAIN-compatible programs all launch the same scanner user interface – the interface provided by the scanner itself. This is good for casual users, because regardless of whether they use this program or Photoshop or GIMP, the scanner interface will always look exactly the same. The downside is that TWAIN makes it difficult to implement custom scanner features or options of your own.

Canon scanner user interface
Because TWAIN relies upon the scanner to provide its own user interface, the user will see a different scan window depending on their scanner brand. My Canon scanner software looks like this.

Another advantage of TWAIN is its longevity. The TWAIN standard has been around since 1992, so pretty much every scanner made in the last 20 years will offer TWAIN drivers. By comparison, WIA didn’t exist until the year 2000, and it wasn’t until Windows Vista that most scanners offered WIA support.

One disadvantage of TWAIN is that Windows Vista and Windows 7 give WIA preferential treatment. If you set up a new scanner using the default Windows Hardware Wizard, it may only load WIA drivers – meaning you’ll need to hunt down the install CD that came with your scanner, or download the latest driver bundle from the scanner manufacturer’s website. I discovered this the hard way when testing this program with my Canon all-in-one printer/scanner/fax. If my program can’t find your scanner, download the free GIMP image software from this link and try its File -> Create… -> Scanner/Camera… option. If GIMP can’t find your scanner either, you probably don’t have TWAIN drivers installed. If GIMP works but my program does not, send me a message and I’ll investigate further.

Download the sample project

My sample project is pretty minimalist:

Scanner project user interface
The sample project keeps things simple.

I tried to keep the code as small and simple as possible. Again, the latest version of the EZTW32 dll (v1.19) is included in the download. Future versions of the file should be backwards-compatible; simply replace the existing dll with a newer version.

 

DISCLAIMER: These download files are regularly scanned to ensure they remain free from malicious content. Unfortunately, some virus scanners will flag these .zip files as suspicious simply because they contain source code and/or executable files. I have submitted my projects to a number of companies in an attempt to rectify these false-positives. Some have been cooperative. Others have not. If your virus scanner alerts you regarding these files, please allow the file to be submitted for further analysis (if your program allows for that). This should help ensure that any false-positive warnings gradually disappear for all users.

This site - and its many free downloads - are 100% funded by donations. Please consider a small contribution to fund server costs and to help me support my family. Even $1.00 helps. Thank you!

Seven grayscale conversion algorithms (with pseudocode and VB6 source code)

I have uploaded a great many image processing demonstrations over the years, but today’s project – grayscale conversion techniques – is actually the image processing technique that generates the most email queries for me.  I’m glad to finally have a place to send those queries!

Despite many requests for a grayscale demonstration, I have held off coding anything until I could really present something unique.  I don’t like adding projects to this site that offer nothing novel or interesting, and there are already hundreds of downloads – in every programming language – that demonstrate standard color-to-grayscale conversions.   So rather than add one more “here’s a grayscale algorithm” article, I have spent the past week collecting every known grayscale conversion routine.  To my knowledge, this is the only project on the Internet that presents seven unique grayscale conversion algorithms, and at least two of the algorithms – custom # of grayscale shades with and without dithering – were written from scratch for this very article.

So without further ado, here are seven unique ways to convert a full-color image to grayscale.  (Note: I highly recommend reading the full article so you understand how the various algorithms work and what their purposes might be, but if all you want is the source code, you’ll find it past all the pictures and just above the donation link.)

Grayscale – An Introduction

Black and white (or monochrome) photography dates back to the mid-19th century.  Despite the eventual introduction of color photography, monochromatic photography remains popular.  If anything, the digital revolution has actually increased the popularity of monochromatic photography because any digital camera is capable of taking black-and-white photographs (whereas analog cameras required the use of special monochromatic film).  Monochromatic photography is sometimes considered the “sculpture” variety of photographic art.  It tends to abstract the subject, allowing the photographer to focus on form and interpretation instead of simply reproducing reality.

Because the terminology black-and-white is imprecise – black-and-white photography actually consists of many shades of gray – this article will refer to such images as grayscale.

Several other technical terms will be used throughout my explanations.  The first is color space.  A color space is a way of representing all available colors, typically visualized as a shape or object.  Different ways of representing color lead to different color spaces: the RGB color space is represented as a cube, HSL can be a cylinder, cone, or bicone, and YIQ and YPbPr have more abstract shapes.  This article will primarily reference the RGB and HSL color spaces.

I will also refer frequently to color channels.  Most digital images are composed of three separate color channels: a red channel, a green channel, and a blue channel.  Layering these channels on top of each other creates a full-color image.  Different color models have different channels (sometimes the channels are colors, sometimes they are other values like lightness or saturation), but this article will primarily focus on RGB channels.

How all grayscale algorithms fundamentally work

All grayscale algorithms utilize the same basic three-step process:

  1. Get the red, green, and blue values of a pixel
  2. Use fancy math to turn those numbers into a single gray value
  3. Replace the original red, green, and blue values with the new gray value

When describing grayscale algorithms, I’m going to focus on step 2 – using math to turn color values into a grayscale value. So, when you see a formula like this:

Gray = (Red + Green + Blue) / 3

Recognize that the actual code to implement such an algorithm looks like:


For Each Pixel in Image {

   Red = Pixel.Red
   Green = Pixel.Green
   Blue = Pixel.Blue

   Gray = (Red + Green + Blue) / 3

   Pixel.Red = Gray
   Pixel.Green = Gray
   Pixel.Blue = Gray

}
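
In VB6, the most literal translation of that pseudocode uses a picture box’s Point and PSet methods.  The sketch below assumes a picture box named picImage with ScaleMode set to pixels and AutoRedraw set to True; it is far too slow for real-time work (the attached project uses a faster pixel-access method), but it shows the three steps clearly.

'Averaging grayscale conversion, one pixel at a time (illustration only - slow!)
Private Sub ConvertToGrayAverage()

    Dim x As Long, y As Long
    Dim c As Long, r As Long, g As Long, b As Long, gray As Long
    
    For y = 0 To picImage.ScaleHeight - 1
        For x = 0 To picImage.ScaleWidth - 1
        
            'Step 1: get the red, green, and blue values of this pixel
            c = picImage.Point(x, y)
            r = c And &HFF&
            g = (c \ &H100&) And &HFF&
            b = (c \ &H10000) And &HFF&
            
            'Step 2: turn those numbers into a single gray value
            gray = (r + g + b) \ 3
            
            'Step 3: replace the original red, green, and blue values with the gray value
            picImage.PSet (x, y), RGB(gray, gray, gray)
            
        Next x
    Next y
    
    picImage.Refresh
    
End Sub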

On to the algorithms!

Sample Image:

Promo art for The Secret of Monkey Island: Special Edition, ©2009 LucasArts
This bright, colorful promo art for The Secret of Monkey Island: Special Edition will be used to demonstrate each of our seven unique grayscale algorithms.

Method 1 – Averaging (aka “quick and dirty”)

Grayscale - average method
Grayscale image generated from the formula: Average(Red, Green, Blue)

This method is the most boring, so let’s address it first.  “Averaging” is the most common grayscale conversion routine, and it works like this:

Gray = (Red + Green + Blue) / 3

Fast, simple – no wonder this is the go-to grayscale algorithm for rookie programmers.  This formula generates a reasonably nice grayscale equivalent, and its simplicity makes it easy to implement and optimize (look-up tables work quite well).  However, this formula is not without shortcomings – while fast and simple, it does a poor job of representing shades of gray relative to the way humans perceive luminosity (brightness).  For that, we need something a bit more complex.

Method 2 – Correcting for the human eye (sometimes called “luma” or “luminance,” though such terminology isn’t really accurate)

Grayscale generated using values related to cone density in the human eye
Grayscale generated using a formula similar to (Red * 0.3 + Green * 0.59 + Blue * 0.11)

It’s hard to tell a difference between this image and the one above, so let me provide one more example.  In the image below, method #1 or the “average method” covers the top half of the picture, while method #2 covers the bottom half:

Grayscale methods 1 and 2 compared
If you look closely, you can see a horizontal line running across the center of the image. The top half (the average method) is more washed-out than the bottom half. This is especially visible in the middle-left segment of the image, beneath the cheekbone of the background skull.

The difference between the two methods is even more pronounced when flipping between them at full-size, as you can do in the provided source code.  Now might be a good time to download my sample project (available at the bottom of this article) so you can compare the various algorithms side-by-side.

This second algorithm plays off the fact that cone density in the human eye is not uniform across colors.  Humans perceive green more strongly than red, and red more strongly than blue.  This makes sense from an evolutionary biology standpoint – much of the natural world appears in shades of green, so humans have evolved greater sensitivity to green light.  (Note: this is oversimplified, but accurate.)

Because humans do not perceive all colors equally, the “average method” of grayscale conversion is inaccurate.  Instead of treating red, green, and blue light equally, a good grayscale conversion will weight each color based on how the human eye perceives it.  A common formula in image processors (Photoshop, GIMP) is:

Gray = (Red * 0.3 + Green * 0.59 + Blue * 0.11)

Surprising to see such a large difference between the red, green, and blue coefficients, isn’t it?  This formula requires a bit of extra computation, but it results in a more dynamic grayscale image.  Again, downloading the sample program is the best way to appreciate this, so I recommend grabbing the code, experimenting with it, then returning to this article.

It’s worth noting that there is disagreement on the best formula for this type of grayscale conversion.  In my project, I have chosen to go with the ITU-R BT.709 recommendation, the formula used for HDTV.  This formula, sometimes called luma, looks like this:

Gray = (Red * 0.2126 + Green * 0.7152 + Blue * 0.0722)

Other formats – most notably standard-definition digital video – use the older BT.601 recommendation, which calls for slightly different coefficients:

Gray = (Red * 0.299 + Green * 0.587 + Blue * 0.114)

A full discussion of which formula is “better” is beyond the scope of this article.  For further reading, I strongly suggest the work of Charles Poynton.  For 99% of programmers, the difference between these two formulas is irrelevant.  Both are perceptually preferable to the “average method” discussed at the top of this article.
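
For reference, a direct VB6 translation of the BT.709 weights might look like the function below (the function name is mine, not something from the sample project); switching to BT.601 is a one-line change.

Private Function GetLuma709(ByVal r As Long, ByVal g As Long, ByVal b As Long) As Long
    'BT.709 weights; use 0.299 / 0.587 / 0.114 instead for BT.601
    GetLuma709 = Int(r * 0.2126 + g * 0.7152 + b * 0.0722 + 0.5)
End Function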

Method 3 – Desaturation

Grayscale generated from a Desaturate algorithm
A desaturated image. Desaturating an image takes advantage of the ability to treat the (R, G, B) colorspace as a 3-dimensional cube. Desaturation approximates a luminance value for each pixel by choosing a corresponding point on the neutral axis of the cube.

Next on our list of methods is desaturation.

There are various ways to describe the color of a pixel.  Most programmers use the RGB color model, where each color is described by its red, green, and blue components.  While this is a nice way for a machine to describe color, the RGB color space can be difficult for humans to visualize.  If I tell you, “oh, I just bought a car.  Its color is RGB(122, 0, 255),” you probably can’t picture the color I’m describing.  If, however, I say, “I just bought a car.  It is a bright, vivid shade of violet,” you can probably picture the color in question.  (Note: this is a hypothetical example.  I do not drive a purple car.  :)

For this reason (among others), the HSL color space is sometimes used to describe colors.  HSL stands for hue, saturation, lightness.  Hue could be considered the name of the color – red, green, orange, yellow, etc.  Mathematically, hue is described as an angular dimension on the color wheel (range [0, 360]), where pure red occurs at 0°, pure green at 120°, pure blue at 240°, and back to pure red at 360°.  Saturation describes how vivid a color is; a very vivid color has full saturation, while gray has no saturation.  Lightness describes the brightness of a color; white has full lightness, while black has zero lightness.

Desaturating an image works by converting an RGB triplet to an HSL triplet, then forcing the saturation to zero. Basically, this takes a color and converts it to its least-saturated variant.  The mathematics of this conversion are more complex than this article warrants, so I’ll simply provide the shortcut calculation.  A pixel can be desaturated by finding the midpoint between the maximum of (R, G, B) and the minimum of (R, G, B), like so:

Gray = ( Max(Red, Green, Blue) + Min(Red, Green, Blue) ) / 2

In terms of the RGB color space, desaturation forces each pixel to a point along the neutral axis running from (0, 0, 0) to (255, 255, 255).  If that makes no sense, take a moment to read this wikipedia article about the RGB color space.
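
Because VB6 has no built-in Max or Min functions, a pair of small helpers keeps the formula readable.  The helper names below (MaxOf3, MinOf3) are mine, used purely for illustration:

Private Function MaxOf3(ByVal a As Long, ByVal b As Long, ByVal c As Long) As Long
    MaxOf3 = a
    If b > MaxOf3 Then MaxOf3 = b
    If c > MaxOf3 Then MaxOf3 = c
End Function

Private Function MinOf3(ByVal a As Long, ByVal b As Long, ByVal c As Long) As Long
    MinOf3 = a
    If b < MinOf3 Then MinOf3 = b
    If c < MinOf3 Then MinOf3 = c
End Function

'Desaturation: the midpoint of the largest and smallest channel values
Private Function GetGrayDesaturated(ByVal r As Long, ByVal g As Long, ByVal b As Long) As Long
    GetGrayDesaturated = (MaxOf3(r, g, b) + MinOf3(r, g, b)) \ 2
End Function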

Desaturation results in a flatter, softer grayscale image.  If you compare this desaturated sample to the human-eye-corrected sample (Method #2), you should notice a difference in the contrast of the image.  Method #2 seems more like an Ansel Adams photograph, while desaturation looks like the kind of grayscale photo you might take with a cheap point-and-shoot camera.  Of the three methods discussed thus far, desaturation results in the flattest (least contrast) and darkest overall image.

Method 4 – Decomposition (think of it as de-composition, e.g. not the biological process!)

Decomposition - Max Values
Decomposition using maximum values
Decomposition - Minimum Values
Decomposition using minimum values

Decomposing an image (sounds gross, doesn’t it?) could be considered a simpler form of desaturation.  To decompose an image, we force each pixel to the highest (maximum) or lowest (minimum) of its red, green, and blue values.  Note that this is done on a per-pixel basis – so if we are performing a maximum decompose and pixel #1 is RGB(255, 0, 0) while pixel #2 is RGB(0, 0, 64), we will set pixel #1 to 255 and pixel #2 to 64.  Decomposition only cares about which color value is highest or lowest – not which channel it comes from.

Maximum decomposition:

Gray = Max(Red, Green, Blue)

Minimum decomposition:

Gray = Min(Red, Green, Blue)

As you can imagine, a maximum decomposition provides a brighter grayscale image, while a minimum decomposition provides a darker one.
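
In VB6, both decompositions reduce to a single call per pixel (reusing the hypothetical MaxOf3/MinOf3 helpers from the desaturation sketch above):

Gray = MaxOf3(Red, Green, Blue)   'maximum decomposition
Gray = MinOf3(Red, Green, Blue)   'minimum decomposition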

This method of grayscale reduction is typically used for artistic effect.

Method 5 – Single color channel

Grayscale - red channel only
Grayscale generated by using only red channel values.
Grayscale - green channel only
Grayscale generated by using only green channel values.
Grayscale - blue channel only
Grayscale generated by using only blue channel values.

Finally, we reach the fastest computational method for grayscale reduction – using data from a single color channel.  Unlike all the methods mentioned so far, this method requires no calculations.  All it does is pick a single channel and make that the grayscale value, as in:

Gray = Red

…or:

Gray = Green

…or:

Gray = Blue

Believe it or not, this crude algorithm is the one most digital cameras use for taking “grayscale” photos.  CCDs in digital cameras consist of a grid of red, green, and blue sensors, and rather than perform the necessary math to convert RGB values to gray ones, they simply grab a single channel (green, for the reasons mentioned in Method #2 – human eye correction) and call that the grayscale one.  For this reason, most photographers recommend against using your camera’s built-in grayscale option.  Instead, shoot everything in color and then perform the grayscale conversion later, using whatever method leads to the best result.

It is difficult to predict the results of this method of grayscale conversion.  As such, it is usually reserved for artistic effect.

Method 6 – Custom # of gray shades

Grayscale using only 4 shades
Grayscale using only 4 shades - black, dark gray, light gray, and white

Now it’s time for the fun algorithms.  Method #6, which I wrote from scratch for this project, allows the user to specify how many shades of gray the resulting image will use.  Any value between 2 and 256 is accepted; 2 results in a black-and-white image, while 256 gives you an image identical to Method #1 above.  This project only uses 8-bit color channels, but for 16 or 24-bit grayscale images (and their resulting 65,536 and 16,777,216 maximums) this code would work just fine.

The algorithm works by selecting X # of gray values, equally spread (inclusively) between zero luminance – black – and full luminance – white.  The above image uses four shades of gray.  Here is another example, using sixteen shades of gray:

Grayscale using 16 shades of gray
In this image, we use 16 shades of gray spanning from black to white

This grayscale algorithm is a bit more complex. It looks something like:


ConversionFactor = 255 / (NumberOfShades - 1)
AverageValue = (Red + Green + Blue) / 3
Gray = Integer((AverageValue / ConversionFactor) + 0.5) * ConversionFactor

Notes:
-NumberOfShades is a value between 2 and 256
-technically, any grayscale algorithm could be used to calculate AverageValue; it simply provides
 an initial gray value estimate
-the "+ 0.5" addition rounds to the nearest available shade instead of truncating; whether you
 need it depends on how your language converts floating-point values to integers, as some round automatically
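
In VB6, that pseudocode might translate to something like the function below (sketch only; the function name is mine).  Int() truncates in VB6, which is why the + 0.5 is kept:

Private Function GetGrayNShades(ByVal r As Long, ByVal g As Long, ByVal b As Long, _
 ByVal numShades As Long) As Long

    Dim conversionFactor As Double, avgValue As Double
    
    conversionFactor = 255 / (numShades - 1)
    avgValue = (r + g + b) / 3
    
    'Int() truncates, so the + 0.5 rounds to the nearest available shade
    GetGrayNShades = Int((avgValue / conversionFactor) + 0.5) * conversionFactor
    
End Function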

I enjoy the artistic possibilities of this algorithm.  The attached source code renders all grayscale images in real-time, so for a better understanding of this algorithm, load up the sample code and rapidly scroll between different numbers of gray shades.

Method 7 - Custom # of gray shades with dithering (in this example, horizontal error-diffusion dithering)

Grayscale - four shades, dithered
This image also uses only four shades of gray (black, dark gray, light gray, white), but it adds full error-diffusion dithering support

Our final algorithm is perhaps the strangest one of all.  Like the previous method, it allows the user to specify any value in the [2,256] range, and the algorithm will automatically calculate the best spread of grayscale values for that range.  However, this algorithm also adds full dithering support.

What is dithering, you ask?  In image processing, dithering uses optical illusions to make an image look more colorful than it actually is.  Dithering algorithms work by interspersing whatever colors are available into new patterns - ordered or random - that fool the human eye into perceiving more colors than are actually present.  If that makes no sense, take a look at this gallery of dithered images.

There are many different dithering algorithms.  The one I provide is one of the simpler error-diffusion mechanisms: a one-dimensional diffusion that bleeds color conversion errors from left to right.
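
To give a rough feel for the idea (this is not the exact routine from the download), here is a sketch of one-dimensional error diffusion applied to a single row of gray values.  grayRow() is assumed to hold the original 0-255 gray values for one scanline:

'Quantize one row to numShades shades, pushing each pixel's rounding error onto
' the pixel to its right (simple left-to-right error diffusion)
Private Sub DitherRow(ByRef grayRow() As Long, ByVal numShades As Long)

    Dim x As Long, newGray As Long, grayError As Long
    Dim conversionFactor As Double
    conversionFactor = 255 / (numShades - 1)
    
    For x = LBound(grayRow) To UBound(grayRow)
    
        'Round the current value to the nearest available shade
        newGray = Int((grayRow(x) / conversionFactor) + 0.5) * conversionFactor
        
        'Remember how far off that rounding was...
        grayError = grayRow(x) - newGray
        grayRow(x) = newGray
        
        '...and push the error onto the next pixel in the row
        If x < UBound(grayRow) Then
            grayRow(x + 1) = grayRow(x + 1) + grayError
            If grayRow(x + 1) < 0 Then grayRow(x + 1) = 0
            If grayRow(x + 1) > 255 Then grayRow(x + 1) = 255
        End If
        
    Next x
    
End Sub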

If you look at the image above, you'll notice that only four colors are present - black, dark gray, light gray, and white - but because these colors are mixed together, from a distance this image looks much closer to a true grayscale image than the four-color non-dithered image under Method #6.  Here is a side-by-side comparison:

Side-by-side of dithered and non-dithered 4-color grayscale images
The left side of the image is a 4-shade non-dithered image; the right side is a 4-shade image WITH dithering

When few colors are available, dithering preserves more nuances than a non-dithered image, but the trade-off is a "dirty," speckled look.  Some dithering algorithms are better than others; the one I've used falls somewhere in the middle, which is why I selected it.

As a final example, here is a 16-color grayscale image with full dithering, followed by a side-by-side comparison with the non-dithered version:

Grayscale image, 16 shades, dithered
Hard to believe only 16 shades of gray are used in this image, isn't it?
Grayscale, 16 shades, dithered vs non-dithered
As the number of shades of gray in an image increases, dithering artifacts become less and less noticeable. Can you tell which side of the image is dithered and which is not?

Because the code for this algorithm is fairly complex, I'm going to refer you to the download for details. Simply open the Grayscale.frm file in your text editor of choice, then find the drawGrayscaleCustomShadesDithered sub. It has all the gory details, with comments.

Conclusion

If you're reading this from a slow Internet connection, I apologize for the image-heavy nature of this article.  Unfortunately, the only way to really demonstrate all these grayscale techniques is by showing many examples!

The source code for this project, like all image processing code on this site, runs in real-time.  The GUI is simple and streamlined, automatically hiding and displaying relevant user-adjustable options as you click through the various algorithms:

GUI of the provided source code
GUI of the provided source code. The program also allows you to load your own images.

Each algorithm is provided as a stand-alone method, accepting a source and destination picturebox as parameters.  I designed it this way so you can grab whatever algorithms interest you and drop them straight into an existing project, without need for modification.

Comments and suggestions are welcome.  If you know of any interesting grayscale conversion algorithms I might have missed, please let me know.

(Fun fact: want to convert a grayscale image back to color?  If so, check out my real-time image colorization project.)


Real-time Diffuse (Spread) Image Filter in VB6

One brand of camera diffusion lenses
A set of camera diffusion lenses.

In traditional photography and film, a diffusion filter is used to soften light from a flash or stationary lamp.  Specialized lenses are available for this purpose, but the effect can be cheaply replicated by smearing petroleum jelly over the light (seriously) or by shooting through a sheet of nylon.

In image processing, a diffusion filter often means something else entirely.  Photoshop’s “Diffuse” filter randomly rearranges pixels within a set radius.  (GIMP can do the same thing, but the effect is more accurately titled “Spread.”)  This effect can be animated for a cheap explosion effect – something a number of SNES, Genesis, and DOS games used to great effect.

This project demonstrates a simple, real-time method for replicating such an effect.  All code is commented and reasonably optimized, and an animated “special effect” version is provided for those interested.  Unlike Photoshop, this routine allows you to specify separate maximum horizontal and vertical random distances, and it can optionally wrap pixels around image edges.
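
The core idea is tiny: for every pixel in the output image, copy a randomly chosen pixel from within the allowed distance in the source image.  Here is a minimal sketch using Point and PSet (slow, but easy to follow); the control names srcPic and dstPic are hypothetical, and both picture boxes are assumed to use pixel ScaleMode with AutoRedraw enabled:

Private Sub DiffuseImage(ByVal maxDistX As Long, ByVal maxDistY As Long)

    Dim x As Long, y As Long, srcX As Long, srcY As Long
    Randomize
    
    For y = 0 To srcPic.ScaleHeight - 1
        For x = 0 To srcPic.ScaleWidth - 1
        
            'Pick a random source pixel within +/- maxDist of the current position
            srcX = x + Int(Rnd * (maxDistX * 2 + 1)) - maxDistX
            srcY = y + Int(Rnd * (maxDistY * 2 + 1)) - maxDistY
            
            'Clamp to the image edges (the real routine can wrap around instead)
            If srcX < 0 Then srcX = 0
            If srcX > srcPic.ScaleWidth - 1 Then srcX = srcPic.ScaleWidth - 1
            If srcY < 0 Then srcY = 0
            If srcY > srcPic.ScaleHeight - 1 Then srcY = srcPic.ScaleHeight - 1
            
            dstPic.PSet (x, y), srcPic.Point(srcX, srcY)
            
        Next x
    Next y
    
    dstPic.Refresh
    
End Sub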

LittleBigPlanet mini poster
Here's the original image (a poster for LittleBigPlanet)
Here is the same image with a diffuse filter applied (max distance=5)
...and here is the image again, but with max distance = 50
...and one more example. This time, edge wrapping has been enabled. Note the bleed of planet pixels at the top and black pixels at the bottom.


How to Colorize an Image (in VB6)

“Colorization” in image processing can refer to one of several things. Most commonly, to colorize an image is to take an image without color (like a black-and-white photograph) and artificially apply color to it. One example is the old Three Stooges movies, which were originally shot in black-and-white but re-released several years ago in color. Colorization of an entire movie is expensive and time-consuming, and a lot of human intervention is required to make things look right.

Another form of “colorization” is taking any image – including full-color ones – and colorizing the image for dramatic or artistic effect. This is the type of colorization filter provided by software like Photoshop and GIMP, and it’s also the effect my source code provides.

Enslaved poster - original
Here's the original image (a poster for Enslaved: Odyssey to the West)
Enslaved poster - blue colorization
...and here's the same poster, colorized

Colorization works by retaining certain data about each pixel’s color (luminance and possibly saturation) while ignoring other data about color (hue). In the demonstration above, each pixel in the second picture has the exact same saturation and luminance as the top picture, but all hue values have been replaced with blue.
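
For a single pixel, that process might look like the sketch below: convert the pixel’s RGB values to HSL, keep its lightness and saturation, swap in the new hue, then convert back to RGB.  The function and helper names are mine, and this is only an illustration of the standard HSL math – the sample project’s implementation differs.

'Colorize one pixel by replacing its hue while keeping lightness and saturation.
'newHue is in the range [0, 1] (e.g. 0 = red, 1/3 = green, 2/3 = blue).
Private Function ColorizePixel(ByVal r As Long, ByVal g As Long, ByVal b As Long, _
 ByVal newHue As Double) As Long

    Dim rf As Double, gf As Double, bf As Double
    Dim mx As Double, mn As Double, lum As Double, sat As Double
    Dim p As Double, q As Double
    
    rf = r / 255: gf = g / 255: bf = b / 255
    
    'Lightness and saturation, from the standard HSL definitions
    mx = rf: If gf > mx Then mx = gf
    If bf > mx Then mx = bf
    mn = rf: If gf < mn Then mn = gf
    If bf < mn Then mn = bf
    
    lum = (mx + mn) / 2
    If mx = mn Then
        sat = 0
    ElseIf lum < 0.5 Then
        sat = (mx - mn) / (mx + mn)
    Else
        sat = (mx - mn) / (2 - mx - mn)
    End If
    
    'Rebuild the pixel from (newHue, sat, lum) using the standard HSL-to-RGB formulas
    If lum < 0.5 Then q = lum * (1 + sat) Else q = lum + sat - (lum * sat)
    p = 2 * lum - q
    
    ColorizePixel = RGB(HueToChannel(p, q, newHue + 1 / 3) * 255, _
                        HueToChannel(p, q, newHue) * 255, _
                        HueToChannel(p, q, newHue - 1 / 3) * 255)
    
End Function

Private Function HueToChannel(ByVal p As Double, ByVal q As Double, ByVal t As Double) As Double
    If t < 0 Then t = t + 1
    If t > 1 Then t = t - 1
    If t < 1 / 6 Then
        HueToChannel = p + (q - p) * 6 * t
    ElseIf t < 0.5 Then
        HueToChannel = q
    ElseIf t < 2 / 3 Then
        HueToChannel = p + (q - p) * (2 / 3 - t) * 6
    Else
        HueToChannel = p
    End If
End Function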

Different programs implement colorization differently. Most require you to specify hue and saturation values, with luminance being optional. I really like the effect created when you keep the saturation values from the original image. If you force saturation values to an arbitrary number – as Photoshop and GIMP do – the colorized image looks either drab or blown-out.

Enslaved poster - orange, original saturation
Here's another colorization - this time to orange. Saturation values are unchanged.
Enslaved poster - orange, 50 percent saturation
Here's the same image, but with saturation forced to 50 percent (Photoshop style). See how the characters and background blur together? The nice contrast between the background buildings and the character on the right is no longer present.
Enslaved poster - orange, 100 percent saturation
...and here's the image again, but with saturation forced to 100 percent. This looks terrible, IMO.

I think the top image in this set offers the most interesting colorization… but since I wrote this code, I could be biased… :)

Full sample code is provided, and – like all code on this site – it’s fast enough to run in real-time.

Colorize program screenshot
Here's a screenshot of the GUI attached to the sample code

Comments and ideas for improvement are always welcome.

 


How to properly capture the screen in VB6

There are quite a few ways to capture the screen in VB6. I recommend the following method for several reasons:

  1. It’s reliable.  These APIs offer consistent results (unlike some others).
  2. It’s simple.  Each API call serves a logical purpose, and I’ve detailed those below.
  3. It’s fast.  The code can easily be modified to operate on a timer system, for example.

You can download the sample project, or simply copy-and-paste this code into any blank form:


'Screen Capture Demo by Tanner Helland (published 2008, updated 2012)
' http://www.tannerhelland.com

'If you like VB game and graphics code, be sure to subscribe to my RSS feed at
' http://www.tannerhelland.com/feed/

'The required API calls are:

'This call gives us the hWnd (window handle) of the screen
Private Declare Function GetDesktopWindow Lib "user32" () As Long

'This call assigns an hDC (handle of device context) from an hWnd
Private Declare Function GetDC Lib "user32" (ByVal hWnd As Long) As Long

'BitBlt lets us draw an image from a hDC to another hDC (in our case, from an hDC of the screen capture
' to the hDC of a VB picture box)
Private Declare Function BitBlt Lib "gdi32" (ByVal hDC As Long, ByVal x As Long, ByVal y As Long, _
  ByVal nWidth As Long, ByVal nHeight As Long, ByVal hSrcDC As Long, ByVal xSrc As Long, _
  ByVal ySrc As Long, ByVal opCode As Long) As Long

'ReleaseDC will be used to release the screen's hDC once the screen capture is complete.
Private Declare Function ReleaseDC Lib "user32" (ByVal hWnd As Long, ByVal hDC As Long) As Long

'This sample project copies the screen when the form loads; you could also place this code in
' a command button (or any other input)
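
'Note: the sample project's form is named FormScreenCapture, which is why that name appears
' below; if you paste this code into a differently named form, replace those references with "Me".
' Also, the form's AutoRedraw property should be set to True so the Image property used at the
' end of this sub actually contains the BitBlt output.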
Private Sub Form_Load()

    'First, minimize this window
    Me.WindowState = vbMinimized
    
    'Get the hWnd of the screen
    Dim scrHwnd As Long
    scrHwnd = GetDesktopWindow
    
    'Now, assign an hDC to the hWnd we generated
    Dim shDC As Long
    shDC = GetDC(scrHwnd)
    
    'Determine the size of the screen
    Dim screenWidth As Long, screenHeight As Long
    screenWidth = Screen.Width \ Screen.TwipsPerPixelX
    screenHeight = Screen.Height \ Screen.TwipsPerPixelY
    
    'Copy the data from the screen hDC to this VB form
    BitBlt FormScreenCapture.hDC, 0, 0, screenWidth, screenHeight, shDC, 0, 0, vbSrcCopy

    'Release our hold on the screen's hDC
    ReleaseDC scrHwnd, shDC
    
    'Set the picture of the form to equal its image
    FormScreenCapture.Picture = FormScreenCapture.Image
    
    'Restore the window
    Me.WindowState = vbNormal

End Sub

 
