A simple algorithm for correcting lens distortion

One of the new features in the development branch of my open-source photo editor is a simple tool for correcting lens distortion. I thought I’d share the algorithm I use, in case others find it useful. (There are very few useful examples of lens correction on the Internet – most articles simply refer to existing software packages, rather than explaining how the software works.)

Lens distortion is a complex beast, and many approaches have been developed to deal with it. Some professional software packages address the problem by providing a comprehensive list of cameras and lenses – the user just picks their equipment from the list, and the software applies a correction algorithm using a table of hard-coded values. This approach requires far more resources than a small developer like me can manage, so I chose a simpler solution: a universal algorithm that lets the user apply their own correction, with two tunable parameters for controlling the strength of the correction.

PhotoDemon’s new lens correction tool in action.

The key part of the algorithm is less than ten lines of code, so there’s not much work involved. The effect is also fast enough to preview in real-time.

Before sharing the algorithm, let me demonstrate its output. Here is a sample photo that suffers from typical spherical distortion:

This lovely demonstration photo comes from Wikipedia, courtesy of Ashley Pomeroy.

Pay special attention to the lines on the floor and the glass panels on the right.

Here’s the same image, as corrected by the algorithm in this article:

Note the straight lines on both the floor and the glass panels on the right. Not bad, eh?

My use of simple bilinear resampling blurs the output slightly; a more sophisticated resampling technique would produce clearer results.

A key feature of the algorithm is that it works at any aspect ratio – rectangular images, like the one above, are handled just fine, as are perfectly square images.

Anyway, here is the required code, as pseudocode:


input:
    strength as floating point >= 0.  0 = no change; higher numbers mean stronger correction.
    zoom as floating point >= 1.  (1 = no change in zoom)

algorithm:

    set halfWidth = imageWidth / 2
    set halfHeight = imageHeight / 2
    
    if strength = 0 then strength = 0.00001
    set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength

    for each pixel (x,y) in destinationImage
        set newX = x - halfWidth
        set newY = y - halfHeight

        set distance = squareroot(newX ^ 2 + newY ^ 2)
        set r = distance / correctionRadius
        
        if r = 0 then
            set theta = 1
        else
            set theta = arctangent(r) / r

        set sourceX = halfWidth + theta * newX * zoom
        set sourceY = halfHeight + theta * newY * zoom

        set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)

That’s all there is to it. Note that you’ll need to do some bounds checking, as sourceX and sourceY may lie outside the bounds of the original image. Note also that sourceX and sourceY will be floating-point values – so for best results, you’ll want to interpolate the color used instead of just clamping sourceX and sourceY to integer values.
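
If you’d like to see the algorithm in a real language, here’s a minimal sketch in Python with NumPy and Pillow (my choice for illustration – PhotoDemon itself is written in VB6, and the real source is linked below). It uses nearest-neighbour sampling and fills out-of-bounds pixels with black; swap in bilinear interpolation for best results:

import numpy as np
from PIL import Image

def correct_lens_distortion(src, strength, zoom=1.0):
    # src is an H x W x 3 uint8 array; returns the corrected image
    h, w = src.shape[:2]
    half_w, half_h = w / 2.0, h / 2.0
    if strength == 0:
        strength = 1e-5                      # avoid the divide-by-zero below
    correction_radius = np.sqrt(w ** 2 + h ** 2) / strength

    # destination pixel grid, re-centered on the middle of the image
    ys, xs = np.mgrid[0:h, 0:w]
    new_x, new_y = xs - half_w, ys - half_h
    r = np.sqrt(new_x ** 2 + new_y ** 2) / correction_radius
    theta = np.where(r == 0, 1.0, np.arctan(r) / np.maximum(r, 1e-12))

    src_x = half_w + theta * new_x * zoom
    src_y = half_h + theta * new_y * zoom

    # nearest-neighbour sampling; out-of-bounds pixels stay black
    sx, sy = np.rint(src_x).astype(int), np.rint(src_y).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    dst = np.zeros_like(src)
    dst[inside] = src[sy[inside], sx[inside]]
    return dst

img = np.asarray(Image.open("distorted.jpg"))     # any test image
Image.fromarray(correct_lens_distortion(img, strength=1.5)).save("corrected.jpg")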

I should mention that the algorithm works just fine without the zoom parameter. I added the zoom parameter after some experimentation; specifically, I find zoom useful in two ways:

  • On images with only minor lens distortion, zooming out reduces stretching artifacts at the edges of the corrected image
  • On images with severe distortion, such as true fish-eye photos, zooming out retains more of the source material

As there is not a universally “correct” solution to these two scenarios, I recommend providing zoom as a tunable parameter. To give a specific example of the second circumstance, consider this fish-eye photo from Wikipedia, courtesy of Josef F. Stuefer:

Severe distortion like this is difficult to correct completely.
Severe distortion like this is difficult to fully correct.

If we attempt to correct the image without applying any zoom, the image must be stretched so far that much of the edge content is lost completely:

This is hardly the same photo. Note the visible stretching at the edges; the pier at the bottom has been completely erased!

By utilizing a zoom parameter, it is possible to include more of the image in the finished result:

Use of a zoom parameter allows us to preserve much more of the photo. When correcting severe distortion like this, you might want to apply a sharpening algorithm to the final image. (This sample image has no sharpening applied.)

Again, I only use a simple resampling technique; a more sophisticated one would produce clearer results at the edges.

If you’d like to see my actual source code, check out this GitHub link. The fun begins at line 194. I also include an optional radius parameter, which allows the user to correct only a subset of the image (rather than the entire thing), but other than that the code is identical to what you see above.

Enjoy!

P.S. For a great discussion of fish-eye distortion from a pro photographer’s perspective, check out http://photo.net/learn/fisheye/


68 thoughts on “A simple algorithm for correcting lens distortion”

  1. What do you mean by ‘strength’ here? Also, I did not get the meaning of the input:

    “input:
    strength as floating point >= 0. 0 = no change, high numbers equal stronger correction.
    zoom as floating point >= 1. (1 = no change in zoom)”

    1. Hi Anamay. I’m not sure I understand your question. “Strength” refers to the strength of the correction. This is a user-supplied value. If the algorithm is supplied with a strength value of 0, it will return the original image. As the strength value increases, so does the severity of the correction. (You can think of the correction as working by “pulling” the four corners of the image outward. Higher strength values = more pulling.)

      The easiest way to understand the algorithm would be to see it in action. If it’s not feasible to implement it yourself, you can test it via my open-source photo editor:

      http://photodemon.org/download/

      The tool you want is the Effects -> Distort -> Correct existing lens distortion tool.

      1. Thanks Tanner. I got the answer to my question. One more thing I wanted to ask: should the value of theta in the above program be in degrees or in radians?
        I implemented the above code in LabVIEW but I’m getting barrel distortion.
        Thank you

      2. Radians are the appropriate measurement. The only place radian/degrees really matters is the arctangent function, so if for some reason your development tool returns arctangent in degrees, convert it to radians before dividing by r.
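
        To make that concrete, here’s what it looks like in Python (my example – the thread mentions LabVIEW, so names will differ):

        import math

        def theta_from_r(r):
            # math.atan already returns radians, so no conversion is needed
            return math.atan(r) / r if r != 0 else 1.0

        # if your tool’s arctangent returns degrees instead (hypothetical
        # atan_degrees function), convert before dividing by r:
        # theta = math.radians(atan_degrees(r)) / r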

  2. Tanner,
    What value of strength should I use to remove pincushion distortion? I tried many values (positive, negative, fractional), but the distortion does not go away.

    1. Unfortunately, this simple algorithm only fixes barrel distortion. You will need a different algorithm for pincushion distortion. My open-source photo editor has the following tool that can fix pincushion distortion:

      Effects -> Distort -> Poke

      If you like it, you can find the source code at this link:

      https://github.com/tannerhelland/PhotoDemon/blob/master/Forms/VBP_FormPoke.frm

      It is similar to this algorithm, but with support for pincushion distortion as well.

    2. When setting the color of the pixel in the last line, try swapping (x,y) with (sourceX,sourceY),
      such as: dest[sourceX, sourceY] = img[x, y]

      1. I meant: could you explain why you took Correction Radius as sqrt(imagewidth^2+imageheight^2)/strength and r as distance/correctionradius? Also, why did you take theta as arctangent(r)/r though we normally take it as arctangent(y/x)?

      2. The easiest explanation on why those formulas are necessary is: “because they are.” I don’t mean to sound cheeky, but if you change the algorithm, you get a different function. Feel free to substitute other terms to see how it affects the final result.

        “Correction Radius as sqrt(imagewidth^2+imageheight^2) / strength” – this is just the max radius of the image, divided by the strength of the transform. Think of it as a reference for the correction; as the correction strength increases, so does the “radius” of the new image, specifically the distance from the center to a corner.

        “r as distance/correctionradius” is necessary because the strength of the correction is variable for each pixel; specifically, it varies based on the distance between a pixel and the center of the image. Pixels at the outer edges get pushed outward more strongly than those near the center, so we use the ratio between each pixel’s center-distance and the max possible center-distance to determine how we render a given point.

        “why did you take theta as arctangent(r)/r though we normally take it as arctangent(y/x)?” Note that the atan2(y, x) function in most programming languages is different from the standalone single-parameter atan() function. The two-parameter variant is used when you want the function to return a quadrant-specific answer. We don’t care about that because directionality is handled by the final sourceX and sourceY calculation. We want a quadrant-agnostic answer, so we use the single-parameter atan function.
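
        A quick illustration in Python (my addition, not part of the original reply):

        import math
        print(math.atan(1.0))           # 0.785..., quadrant-agnostic single-parameter form
        print(math.atan2(1.0, -1.0))    # 2.356..., quadrant-aware two-parameter form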

  3. I am getting black and white images. The three channels are separating.
    The resulting image contains part of the red channel at its left, the green channel in its center, and the blue channel at its right.
    Can you explain why this is happening?

  4. Thanks very much for this information.
    The algorithm was really helpful for understanding the correction of distortion.
    If you don’t mind, could you tell me where I can find the source of this algorithm, like a paper or a journal?

    1. Hi Sangmin. I’m afraid this algorithm is one of my own creation, so it didn’t originate in an academic journal. Here are some academic sources on other, more complex lens distortion functions. Maybe they’ll be helpful to you:

      http://www.sciencedirect.com/science/article/pii/S1077316996904074
      http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=958994&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D958994
      http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1467270&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1467270

      1. Thanks for the reply.

        Before seeing your reply, I tried and applied the algorithm above. The correction result is pretty good. However, I found there is an expansion phenomenon at the sides of the image. Is it also possible to correct this expansion error?

        Once again, thanks for the reply; I will refer to your links.

      2. Happy to help. The expansion you describe is caused when you forcibly clamp values that fall outside the image boundaries.

        For example, when applying the algorithm to a 100×100 image, some pixels might be mapped to a location like (150, 150) or (-50, -50). When the algorithm generates pixel positions like this, you have a few choices:

        1) Clamp them to the nearest valid value that lies within the image’s boundaries (e.g. clamp (-50, -50) to (0, 0) instead).
        2) Force those pixels to black.

        If you use (2) instead of (1), you will get a result like the screenshot at the top of this page, without the “expansion” you describe.

  5. Hello!

    Very nice work dude! However, I tried to do it in Python, but it seems to give the inverse result. Would you please take a look at my code? I’ll send it to you through your contact page. Thank you very much :)

    1. Hi Julien. You need to add some kind of “strength” parameter that controls the strength of the adjustment. Without this parameter, the image will not change. (Basically, the function uses the difference between the current radius of each pixel – relative to the image’s center – and a new radius, generated by the “strength” parameter, to know how much correction to apply.)

      As it is, you have an implicit value of strength = 1, which results in no meaningful change to the image.

      So change this:

      correction_angle = (sqrt(((largeur_image)**2) + ((hauteur_image)**2)))

      …to something like this:

      correction_angle = sqrt(((largeur_image)**2) + ((hauteur_image)**2)) / (strength_goes_here)

  6. Thanks for the pseudocode! I was going to write a more general program, but I was in a hurry and this is good enough! I implemented it in C++, using ImageMagick (Magick++) and accelerated it a bit with OpenMP. Here’s the source:

    http://nrstickley.com/uploads/unwarp.cpp

    It’s not particularly pretty, but it’s good enough for my own use. It’s a place to start. My image processing pipeline assumes that the image will later be cropped, so I didn’t bother fixing the edge problem.

    1. Hello dear Srecko Ristic,
      I tried to implement this algorithm in C# but I ran into a few issues.
      Can you help me? If possible, please share your code.
      Thanks

  7. Hi Tanner,
    many thanks for your algorithm.
    I implemented it in VC++ and the result was very good.
    In your algorithm, you feed the destination image point positions to the algorithm, fetch the corresponding source image point positions from it, and then copy the color of each source image point to its destination image point.
    My problem is this:
    I need to compute the corrected position of some specific pixels in the source image. Could you send me a variant of the algorithm that takes a source image point and determines its final position in the destination image?

    1. Hi Colin. r is each pixel’s distance from the center of the image, divided by the correction radius. A pixel with a “high” r value receives more correction than a pixel with a “low” r value.

      1. Sir, the actual source code page is not accessible. Is it possible to provide another link for the same? Thanks in advance.

  8. set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength – this line leads to a divide-by-zero exception if strength is set to zero. Please also make it clear how the pixels should be read: left to right, or top to bottom.

    I used the above algorithm with strength = 0.0 and zoom = 1.0, and it did not produce the original image, as you said it would. My image was totally collapsed.

    1. Hi Rajesh. The above pseudocode deliberately omits error-checking, but in case it’s helpful, I’ve added an easy check for strength = 0 to avoid the divide-by-zero. Obviously, just set strength to some arbitrarily small value and you’ll be fine.

      The order in which you read pixels is totally irrelevant.

      If your implementation does not reproduce the original image with strength = 0 and zoom = 1, you have probably coded something incorrectly. Feel free to share your code if you’d like more detailed feedback.

  9. Sir, could you explain what the if statement does in your algorithm?
    if r = 0 then
    set theta = 1
    else
    set theta = arctangent(r) / r

    and what is the mathematical significance of theta?

    if the image is a matrix of m*n, what matrix operations should I do to dewarp the image?

    1. Hi Aston. Without that If statement, you’ll cause a divide-by-zero error when calculating theta if r = 0 (as it may be in the exact center of the image). When this happens, we set theta to 1 so that the original pixel(s) in that location are copied as-is.

      I’m not sure what you mean by “mathematical significance” of theta. Theta increases according to two conditions: a pixel’s distance from the center of the image, and its angle relative to the horizontal and vertical axes. Pixels along the diagonal are corrected most, while pixels on the horizontal/vertical axis (where the image’s center is treated as (0, 0)) are corrected least. This is what causes the corners of the image to “pull outward”, causing the correction.

  10. Hi Tanner. Your code is very helpful, but I’m facing a small problem. I’m applying lens correction to a .RAW file. After the correction, the image appears distorted: there are some circular patterns on the image.
    Is it wrong to apply lens correction to a Bayer image rather than an RGB color image?
    To remove the circular patterns, I applied bilinear interpolation, but the resulting image shows hardly any improvement.

    Thanks and Regards

    1. Hi Krishna. When correcting distortion, the most important thing is to make sure you apply identical correction math to each pixel unit as a whole. Bayer filters have different color layouts (RGGB, GRGB, etc) so you need to modify the code to make sure that each “whole pixel unit” is corrected identically.

      Otherwise, you risk the R value being corrected to a different location than the G or B values, which could cause the colors to separate, creating the circular patterns you describe. Hope that helps!
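
      To illustrate (a minimal sketch in Python/NumPy, not PhotoDemon’s actual code): if the raw frame uses an RGGB layout, each of the four Bayer sub-planes can be corrected on its own half-resolution grid with identical parameters, which keeps the channels registered. Here correct_plane is a hypothetical single-channel variant of the algorithm above, and raw, strength and zoom are assumed inputs:

      import numpy as np

      corrected = np.empty_like(raw)            # raw: H x W Bayer mosaic, RGGB assumed
      for dy in (0, 1):
          for dx in (0, 1):
              plane = raw[dy::2, dx::2]         # one sub-plane per Bayer cell position
              # identical strength/zoom for every plane keeps R, G and B aligned
              corrected[dy::2, dx::2] = correct_plane(plane, strength, zoom)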

  11. Hi Tanner. I have one more doubt. A silly question: does the image have to be square for better results?
    Thank you so much in advance.

    1. Good to hear from you again. :) Congratulations on getting your function working!

      With this algorithm, the image doesn’t need to be a square. X and Y are calculated independently against their maximum possible value (“halfWidth” and “halfHeight” in the pseudocode), so the distortion correction will automatically scale to match the X and Y dimensions of the image.

      If the image has circular distortion, but the image shape is *not* a square, you can set halfWidth and halfHeight to the same value to enforce circular correction. Maybe something like this:

      If (halfWidth < halfHeight) Then halfHeight = halfWidth Else halfWidth = halfHeight

      That would take the minimum of width vs height and use it to calculate both X and Y, giving you a circular correction regardless of image shape.

      1. Hi Tanner
        The circular distortion has been completely removed irrespective of the image shape. But the issue is that there are some slightly curved lines, especially towards the bottom-right part of the image, even after running the lens correction. I tried using a square image to fix it, but it didn’t help. Any suggestions?

        Thank you so much in advance :)

  12. Hello! Thanks for your pseudocode, it helped a lot!
    I noticed that the higher strength is, the more the image is cropped. I would like to know if you have a way to compute the size of the new image as a function of strength and the size of the original image.
    Also, could you give some details on the bilinear interpolation you’re using, or propose another interpolation to cancel the blurring effect?

    1. Armand… Bicubic interpolation produced better results than bilinear for me… The circular edges appeared almost perfect when compared to bilinear…

  13. From where do the index values x, y start?
    Is it from the middle of the image or from the top left?

  14. The mathematics/modelling behind the solution presented here is discussed in: Ishii, C., Sudo, Y., and Hashimoto, H., “An image conversion algorithm from fish eye image to perspective image for human eyes”, Proc. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, Port Island, Kobe, Japan, July 2003, vol. 2, pp. 1009–1014.

    1. Also in: Hughes, C., Glavin, M., Jones, E., and Denny, P. (2009), “Wide-angle camera technology for automotive applications: a review”, IET Intelligent Transport Systems, 3(1):19–31.

  15. Hi Tanner! Is this algorithm fast enough to work with HD video (1280×720) in real time? Also, I use the 640×480 resolution of a Logitech C270 camera, which natively has a higher resolution; it looks like for 640×480 they use the upper-left (?) corner of the sensor. So I suppose the distortion is not symmetric in this case. Will this algorithm still work well, or do I need to add some correction to it? Thanks a lot!

    1. Hi Mike. Yes, I actually designed this algorithm explicitly for real-time usage. If the lens is fixed, you can precalculate all (x, y) positions for the transformation and store them in a table. Then this becomes an O(n) operation (which is easily parallelized, no less!) which is the best you can hope for in real-time processing.

      If the distortion is not symmetric, you’ll need to change the center point used by the correction equation. Specifically, the lines:

      set newX = x - halfWidth
      set newY = y - halfHeight

      …need to change. (For a center of (0, 0), you should be able to simply remove the “- halfWidth/Height” portions.)
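
      As a concrete sketch of the lookup-table idea (Python/NumPy here, my choice – the original is VB6), the table is built once for the fixed lens, and each frame then costs a single gather. Out-of-bounds positions are clamped to the nearest edge in this version:

      import numpy as np

      def build_remap_table(w, h, strength, zoom=1.0):
          if strength == 0:
              strength = 1e-5
          correction_radius = np.sqrt(w ** 2 + h ** 2) / strength
          ys, xs = np.mgrid[0:h, 0:w]
          new_x, new_y = xs - w / 2.0, ys - h / 2.0
          r = np.sqrt(new_x ** 2 + new_y ** 2) / correction_radius
          theta = np.where(r == 0, 1.0, np.arctan(r) / np.maximum(r, 1e-12))
          sx = np.clip(np.rint(w / 2.0 + theta * new_x * zoom), 0, w - 1).astype(int)
          sy = np.clip(np.rint(h / 2.0 + theta * new_y * zoom), 0, h - 1).astype(int)
          return sy, sx

      sy, sx = build_remap_table(1280, 720, strength=1.5)   # once, at startup
      # then, per frame (frame is an H x W x 3 array from your capture source):
      # corrected = frame[sy, sx]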

  16. Thank you for the algorithm. I coded it in PureBasic
    and it works well :)
    However, the correction applies to X AND Y together.

    How can I change the value in X and/or Y independently?

  17. currently my code is:

    For x = 1 To image_file\Largeur_image - 1
    For y = 1 To image_file\hauteur_image - 1
    newX.d = x - halfWidth.d
    newY.d = y - halfHeight.d
    distance.d = Sqr(Pow(newX.d, 2) + Pow(newY.d, 2)) ; distance from the image center
    r.d = distance.d / correctionRadius.d

    If r.d <> 0

    theta.d = ATan(r.d) / r.d
    sourceX = halfWidth.d + (theta.d * newX.d * image_file\pos_curs_zoom)
    sourceY = halfHeight.d + (theta.d * newY.d * image_file\pos_curs_zoom)

    If (sourceX > 1 And sourceY > 1) And (sourceX < image_file\largeur_image And sourceY < image_file\hauteur_image)
    Plot(x, y, Image_or(sourceX, sourceY))
    EndIf
    EndIf

    Next y
    SetGadgetState(#progress_calcul, x)
    ; UpdateWindow_(GadgetID(#progress_calcul))
    WaitWindowEvent(2)
    Next x

  18. Hi Tanner,

    Is there a rule of thumb for deciding the value of strength, depending on the lens used to photograph the object? For example, I used a Nikon D750 camera with 28mm and 55mm lenses to photograph the same object. What value of strength should I use for the two shots?

  19. Absolutely outstanding, thank you for this. Question: I have a situation where barrel-distorted images are already being processed to find the location of a blob, so I’m getting that blob’s distorted position. How could I take this position and run it through a variation of your solution to get the “correct” location? Thanks for any help on this.

    1. Hi Bob. Thank you for the kind words.

      This algorithm wasn’t designed for barrel distortion, specifically (and in fact, there are other, better algorithms for that). But as a general rule, any algorithm can be modified to operate on a specific location.

      In the case of the algorithm above, you’ll notice that x and y are remapped around the center of the image (using the “halfWidth” and “halfHeight” values). Simply point those at some other “center” point, and the algorithm will automatically base its calculations off that, instead.

      You can also limit the radius used to prevent correction beyond a certain distance. (This is nice for dealing with certain types of lens aberrations, where only a portion of the lens is affected, for example.)

      Anyway, I don’t know if any of this is useful, but maybe it’s a start. This article has produced a lot of questions about correcting barrel distortion, specifically, so maybe it’s time for me to write a “Part 2”… :)
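
      In the meantime, here’s a minimal sketch of that single-point mapping in Python (my derivation from the pseudocode above, so treat it as a starting point): the algorithm maps a corrected-image offset d to a distorted-image offset s = R * zoom * arctan(d / R), so the inverse is d = R * tan(s / (R * zoom)), valid while s / (R * zoom) stays below pi/2:

      import math

      def undistort_point(x, y, w, h, strength, zoom=1.0):
          if strength == 0:
              strength = 1e-5
          R = math.sqrt(w ** 2 + h ** 2) / strength   # correctionRadius in the pseudocode
          dx, dy = x - w / 2.0, y - h / 2.0
          s = math.hypot(dx, dy)                      # radius in the distorted (source) image
          if s == 0:
              return x, y
          d = R * math.tan(s / (R * zoom))            # radius in the corrected image
          return w / 2.0 + dx * d / s, h / 2.0 + dy * d / s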

    1. Great! I tried it in MATLAB. Works fine. Thank you.
      clc
      clear all
      close all
      I = imread('xxxx.JPG');
      [h, w, ~] = size(I);              % image dimensions
      hh = h/2;                         % half height
      hw = w/2;                         % half width
      strength = 2;
      r1 = sqrt(h^2 + w^2)/strength;    % correction radius
      K = uint8(zeros(h, w, 3));
      for ii = 1:h
      for jj = 1:w
      newx = ii - hh;                   % ii indexes rows (height), so subtract hh
      newy = jj - hw;                   % jj indexes columns (width), so subtract hw
      dist = sqrt(newx^2 + newy^2);
      r = dist/r1;
      if (r == 0)
      theta = 1;
      else
      theta = atan(r)/r;
      end
      sourcex = round(hh + theta*newx);
      sourcey = round(hw + theta*newy);

      K(ii,jj,:) = I(sourcex,sourcey,:);

      end
      end
      figure;
      imshow(K);

  20. This algorithm is very good, but I don’t understand the principle behind it. Do you have a document that explains the algorithm? Thanks.
