One of the new features in the development branch of my open-source photo editor is a simple tool for correcting lens distortion. I thought I’d share the algorithm I use, in case others find it useful. (There are very few useful examples of lens correction on the Internet – most articles simply refer to existing software packages, rather than explaining how the software works.)

Lens distortion is a complex beast, and a lot of approaches have been developed to deal with it. Some professional software packages address the problem by providing a comprehensive list of cameras and lenses – then the user just picks their equipment from the list, and the software applies a correction algorithm using a table of hard-coded values. This approach requires way more resources than a small developer like myself could handle, so I chose a simpler solution: a universal algorithm that allows the user to apply their own correction, with two tunable parameters for controlling the strength of the correction.

The key part of the algorithm is less than ten lines of code, so there’s not much work involved. The effect is also fast enough to preview in real-time.

Before sharing the algorithm, let me demonstrate its output. Here is a sample photo that suffers from typical spherical distortion:

Pay special attention to the lines on the floor and the glass panels on the right.

Here’s the same image, as corrected by the algorithm in this article:

My use of simple bilinear resampling blurs the output slightly; a more sophisticated resampling technique would produce clearer results.

A key feature of the algorithm is that it works at any aspect ratio – rectangular images, like the one above, are handled just fine, as are perfectly square images.

Anyway, here is the required code, as pseudocode:

```
input:
    strength as floating point >= 0 (0 = no change; higher values = stronger correction)
    zoom as floating point >= 1 (1 = no change in zoom)

algorithm:
    set halfWidth = imageWidth / 2
    set halfHeight = imageHeight / 2
    if strength = 0 then set strength = 0.00001
    set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength
    for each pixel (x, y) in destinationImage
        set newX = x - halfWidth
        set newY = y - halfHeight
        set distance = squareroot(newX ^ 2 + newY ^ 2)
        set r = distance / correctionRadius
        if r = 0 then
            set theta = 1
        else
            set theta = arctangent(r) / r
        set sourceX = halfWidth + theta * newX * zoom
        set sourceY = halfHeight + theta * newY * zoom
        set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)
```

That’s all there is to it. Note that you’ll need to do some bounds checking, as sourceX and sourceY may lie outside the bounds of the original image. Note also that sourceX and sourceY will be floating-point values – so for best results, you’ll want to interpolate the sampled color instead of simply truncating sourceX and sourceY to integers.
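If pseudocode feels too abstract, here is a minimal, runnable Python sketch of the same algorithm. It operates on a grayscale image stored as a list of rows; the function name, the black fill for out-of-bounds samples, and the bilinear interpolation step are my own choices layered on top of the pseudocode above:

```python
import math

def correct_lens_distortion(src, strength, zoom=1.0):
    """Undistort a grayscale image (list of rows of numbers).

    strength >= 0 (0 = no change); zoom >= 1 (1 = no change).
    Out-of-bounds source samples are left black, and the fractional
    source coordinates are resolved with bilinear interpolation.
    """
    height, width = len(src), len(src[0])
    half_w, half_h = width / 2.0, height / 2.0
    if strength == 0:
        strength = 1e-5  # avoid divide-by-zero below
    correction_radius = math.sqrt(width ** 2 + height ** 2) / strength

    dst = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dx, dy = x - half_w, y - half_h
            r = math.sqrt(dx * dx + dy * dy) / correction_radius
            theta = 1.0 if r == 0 else math.atan(r) / r
            sx = half_w + theta * dx * zoom
            sy = half_h + theta * dy * zoom
            # Bilinear interpolation of the four neighbouring pixels.
            x0, y0 = int(math.floor(sx)), int(math.floor(sy))
            if x0 < 0 or y0 < 0 or x0 + 1 >= width or y0 + 1 >= height:
                continue  # leave out-of-bounds destination pixels black
            fx, fy = sx - x0, sy - y0
            top = src[y0][x0] * (1 - fx) + src[y0][x0 + 1] * fx
            bot = src[y0 + 1][x0] * (1 - fx) + src[y0 + 1][x0 + 1] * fx
            dst[y][x] = top * (1 - fy) + bot * fy
    return dst
```

With `strength = 0` the function reproduces the input (modulo a vanishingly small offset), which is a handy sanity check for any port of the algorithm.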

I should mention that the algorithm works just fine without the zoom parameter. I added the zoom parameter after some experimentation; specifically, I find zoom useful in two ways:

- On images with only minor lens distortion, zooming out reduces stretching artifacts at the edges of the corrected image
- On images with severe distortion, such as true fish-eye photos, zooming out retains more of the source material

As there is not a universally “correct” solution to these two scenarios, I recommend providing zoom as a tunable parameter. To give a specific example of the second circumstance, consider this fish-eye photo from Wikipedia, courtesy of Josef F. Stuefer:

If we attempt to correct the image without applying any zoom, the image must be stretched so far that much of the edges are lost completely:

By utilizing a zoom parameter, it is possible to include more of the image in the finished result:

Again, I only use a simple resampling technique; a more sophisticated one would produce clearer results at the edges.

If you’d like to see my actual source code, check out this GitHub link. The fun begins at line 194. I also include an optional radius parameter, which allows the user to correct only a subset of the image (rather than the entire thing), but other than that the code is identical to what you see above.

Enjoy!

P.S. For a great discussion of fish-eye distortion from a pro photographer’s perspective, check out http://photo.net/learn/fisheye/

Beautiful and simple, thank you! :)

You’re very welcome!

What do you mean by ‘strength’ here? Also, I did not get the meaning of the input:

“input:

strength as floating point >= 0 (0 = no change; higher values = stronger correction)

zoom as floating point >= 1 (1 = no change in zoom)”

Hi Anamay. I’m not sure I understand your question. “Strength” refers to the strength of the correction. This is a user-supplied value. If the algorithm is supplied with a strength value of 0, it will return the original image. As the strength value increases, so does the severity of the correction. (You can think of the correction as working by “pulling” the four corners of the image outward. Higher strength values = more pulling.)

The easiest way to understand the algorithm would be to see it in action. If it’s not feasible to implement it yourself, you can test it via my open-source photo editor:

http://photodemon.org/download/

The tool you want is the Effects -> Distort -> Correct existing lens distortion tool.

Thanks Tanner. I got the answer to my question. One more thing I wanted to ask: should I take the value of theta in the above program in degrees or in radians?

I implemented the above code in LabVIEW but I’m getting barrel distortion.

Thank You

Radians are the appropriate measurement. The only place radian/degrees really matters is the arctangent function, so if for some reason your development tool returns arctangent in degrees, convert it to radians before dividing by r.
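To illustrate the point above in Python (where `math.atan` already returns radians), the degree-based round trip gives exactly the same theta. The names here are my own; the degree-based value just stands in for a toolkit whose arctangent returns degrees:

```python
import math

r = 0.5

# math.atan returns radians, which is what the formula expects:
theta = math.atan(r) / r

# If your toolkit's arctangent returned degrees instead, convert
# back to radians before dividing by r -- the results then match:
atan_degrees = math.degrees(math.atan(r))  # stand-in for a degree-based atan
theta_from_degrees = math.radians(atan_degrees) / r

print(theta, theta_from_degrees)
```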

Tanner,

What value of strength should I use to remove pincushion distortion? I tried many values (positive, negative, fractional) but the distortion is not going away.

Unfortunately, this simple algorithm only fixes barrel distortion. You will need a different algorithm for pincushion distortion. My open-source photo editor has the following tool that can fix pincushion distortion:

Effects -> Distort -> Poke

If you like it, you can find the source code at this link:

https://github.com/tannerhelland/PhotoDemon/blob/master/Forms/VBP_FormPoke.frm

It is similar to this algorithm, but with support for pincushion distortion as well.

Hi Anamay,

I am really interested in your LabVIEW implementation of this algorithm.

Can you please send me an email?

Hamid.yazdi@yahoo.com

Thanks

Hamid

When setting the color of the pixel in the last line, try swapping (x, y) with (sourceX, sourceY):

`dest[sourceX, sourceY] = img[x, y]`

Hi Tanner. I would like to know what is the physical significance of correction radius, and r.

There is no physical significance, to my knowledge. :)

I meant: could you explain why you took correctionRadius as sqrt(imageWidth^2 + imageHeight^2)/strength, and r as distance/correctionRadius? Also, why did you take theta as arctangent(r)/r, though we normally take it as arctangent(y/x)?

The easiest explanation of why those formulas are necessary is: “because they are.” I don’t mean to sound cheeky, but if you change the algorithm, you get a different function. Feel free to substitute other terms to see how it affects the final result.

“Correction Radius as sqrt(imagewidth^2+imageheight^2) / strength” – this is just the max radius of the image, divided by the strength of the transform. Think of it as a reference for the correction; as the correction strength increases, so does the “radius” of the new image, specifically the distance from the center to a corner.

“r as distance/correctionradius” is necessary because the strength of the correction is variable for each pixel; specifically, it varies based on the distance between a pixel and the center of the image. Pixels at the outer edges get pushed outward more strongly than those near the center, so we use the ratio between each pixel’s center-distance and the max possible center-distance to determine how we render a given point.

“why did you take theta as arctangent(r)/r though we normally take it as arctangent(y/x)?” Note that the two-parameter atan2(y, x) function in most programming languages is different from the standalone single-parameter atan() function. The two-parameter variant is used when you want the function to return a quadrant-specific answer. We don’t care about that, because directionality is handled by the final sourceX and sourceY calculations. We want a quadrant-agnostic answer, so we use the single-parameter atan function.
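To see that quadrant-agnostic falloff numerically, here is a quick Python check of theta = atan(r)/r (my own illustration, not part of the original code):

```python
import math

# theta = atan(r) / r depends only on the distance ratio r, never on
# the pixel's quadrant, and it falls smoothly from 1 toward 0 as r
# grows -- so far-from-center pixels are sampled closer to the center:
for r in (1e-6, 0.5, 1.0, 2.0):
    theta = math.atan(r) / r
    print(f"r = {r:<6} theta = {theta:.4f}")
```

Because theta < 1 everywhere except the exact center, sourceX/sourceY always land closer to the center than (x, y), which is what stretches the corrected image outward.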

I am getting black and white images. The three channels are separating.

The resulting image contains part of the red channel at its left, the green channel in its center, and the blue channel at its right.

Can you explain why this is happening?

Thanks very much for this information

The algorithm was really helpful for understanding the correction of distortion.

If you don’t mind, could you tell me where I can find the source for this algorithm? A paper or journal, for example?

Hi Sangmin. I’m afraid this algorithm is one of my own creation, so it didn’t originate in an academic journal. Here are some academic sources on other, more complex lens distortion functions. Maybe they’ll be helpful to you:

http://www.sciencedirect.com/science/article/pii/S1077316996904074

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=958994&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D958994

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1467270&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1467270

Thanks for the reply.

Before looking at your reply, I tried and applied the algorithm above. The correction result is pretty good. However, I found there is an expansion phenomenon at the sides of the image. Is it also possible to correct this expansion error?

Once again, thanks for the reply; I will refer to your links.

Happy to help. The expansion you describe is caused when you forcibly clamp values that fall outside the image boundaries.

For example, when applying the algorithm to a 100×100 image, some pixels might be mapped to a location like (150, 150) or (-50, -50). When the algorithm generates pixel positions like this, you have a few choices:

1) Clamp them to the nearest valid value that lies within the image’s boundaries (e.g. clamp (-50, -50) to (0, 0) instead).

2) Force those pixels to black.

If you use (2) instead of (1), you will get a result like the screenshot at the top of this page, without the “expansion” you describe.
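The two choices above can be sketched like this (the helper names are my own, with a grayscale image stored as a list of rows):

```python
def clamp_sample(src, sx, sy):
    # Option 1: clamp out-of-range coordinates to the nearest edge
    # pixel. This "smears" edge pixels outward, which is the source
    # of the expansion effect described above.
    h, w = len(src), len(src[0])
    x = min(max(int(sx), 0), w - 1)
    y = min(max(int(sy), 0), h - 1)
    return src[y][x]

def black_sample(src, sx, sy):
    # Option 2: treat out-of-range coordinates as black, so the
    # corrected image simply fades to black past its valid region.
    h, w = len(src), len(src[0])
    x, y = int(sx), int(sy)
    if 0 <= x < w and 0 <= y < h:
        return src[y][x]
    return 0
```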

Hello!

Very nice work, dude! However, I tried to do it in Python, but it seems to give the inverse result. Would you please take a look at my code? I’ll send it to you through your contact page. Thank you very much :)

Hi Julien. You need to add some kind of “strength” parameter that controls the strength of the adjustment. Without this parameter, the image will not change. (Basically, the function uses the difference between the current radius of each pixel – relative to the image’s center – and a new radius, generated by the “strength” parameter, to know how much correction to apply.)

As it is, you have an implicit value of strength = 1, which results in no meaningful change to the image.

So change this:

`correction_angle = (sqrt(((largeur_image)**2) + ((hauteur_image)**2)))`

…to something like this:

`correction_angle = sqrt((largeur_image)**2 + (hauteur_image)**2) / (strength_goes_here)`

Hi, could you share the Python code with me, please?

Thanks for the pseudocode! I was going to write a more general program, but I was in a hurry and this is good enough! I implemented it in C++, using ImageMagick (Magick++) and accelerated it a bit with OpenMP. Here’s the source:

http://nrstickley.com/uploads/unwarp.cpp

It’s not particularly pretty, but it’s good enough for my own use. It’s a place to start. My image processing pipeline assumes that the image will later be cropped, so I didn’t bother fixing the edge problem.

Thanks for the algorithm! I created a C# version. It was useful for a time-lapse application I am developing.

Hello. Can you send me your C# version, please? If you can, I will be glad and will pray for you.

Hello, dear Srecko Ristic,

I tried to implement this algorithm in C# but I got a few issues.

Can you help me? If possible, please share your code.

My mail is kimbop2@gmail.com

Thanks

Hi Tanner

Many thanks for your algorithm.

I implemented it with VC++ and the result was very good.

In your algorithm, you give the destination image point positions to the algorithm, fetch the source image point positions from it, and then copy the color of the source image point to the destination image point.

My problem is this:

I need to compute the correct position of some special pixels in the source image. Can you send me an algorithm that takes a source image point and determines its final position in the destination image?

Hi Tanner,

What is the significance of ‘r’ here?

Hi Colin. r is the distance between each pixel and the center of the image, normalized by the correction radius. A pixel with a “high” r value receives more correction than a pixel with a “low” r value.

Thank you Sir

Sir, the actual source code page is not accessible. Is it possible to provide another link for it? Thanks in advance.

Hi Colin. I’ve updated the article to point at the correct page. Here is a direct link; the master function starts around line 200.

Sorry for the trouble!

`set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength` – this line would lead to a divide-by-zero exception if strength is set to zero. Please also make it clear in what order the pixels are read: left to right, or top to bottom?

I used the above algorithm with strength = 0.0 and zoom = 1.0, which you said should produce the original image, but my image was totally collapsed.

Hi Rajesh. The above pseudocode deliberately omits error-checking, but in case it’s helpful, I’ve added an easy check for strength = 0 to avoid the divide-by-zero. Obviously, just set strength to some arbitrarily small value and you’ll be fine.

The order in which you read pixels is totally irrelevant.

If your implementation does not reproduce the original image with strength = 0 and zoom = 1, you have probably coded something incorrectly. Feel free to share your code if you’d like more detailed feedback.

Sir, could you explain what the if statement in your algorithm does?

```
if r = 0 then
    set theta = 1
else
    set theta = arctangent(r) / r
```

and what is the mathematical significance of theta?

If the image is an m×n matrix, what matrix operations should I do to dewarp the image?

Hi Aston. Without that If statement, you’ll cause a divide-by-zero error when calculating theta if r = 0 (as it may be in the exact center of the image). When this happens, we set theta to 1 so that the original pixel(s) in that location are copied as-is.

I’m not sure what you mean by the “mathematical significance” of theta. The applied correction varies according to two conditions: a pixel’s distance from the center of the image, and its angle relative to the horizontal and vertical axes. Pixels along the diagonal are corrected most, while pixels on the horizontal/vertical axes (where the image’s center is treated as (0, 0)) are corrected least. This is what “pulls” the corners of the image outward, producing the correction.

Thank you. It is very helpful to me.

But can I know the inverse function of this equation?

I want to make a distortion image.

Hi Jin. Here is a link to one version of an “apply lens distortion” algorithm:

https://github.com/tannerhelland/PhotoDemon/blob/master/Forms/Effects_Distort_ApplyLens.frm

The function begins on line 187.

Hi Tanner. Your code is very helpful, but I’m facing a small problem. I’m applying lens correction to a .RAW file. After the correction, the image appears distorted: there are some circular patterns on the image.

Is it wrong to apply lens correction to a Bayer image rather than an RGB color image?

To remove the circular patterns, I applied bilinear interpolation, but the resulting image shows hardly any improvement.

Thanks and Regards

Hi Krishna. When correcting distortion, the most important thing is to make sure you apply identical correction math to each pixel unit as a whole. Bayer filters have different color layouts (RGGB, GRGB, etc.), so you need to modify the code to make sure that each “whole pixel unit” is corrected identically. Otherwise, you risk the R value being corrected to a different location than the G or B values, which could cause the colors to separate, creating the circular patterns you describe. Hope that helps!

Hi Tanner.

Thank you so much for the help!!

The code is working perfectly :-)

Hi Tanner. I have one more doubt. A silly question: is it necessary for the image to be square for better results?

Thank you so much in advance.

Good to hear from you again. :) Congratulations on getting your function working!

With this algorithm, the image doesn’t need to be a square. X and Y are calculated independently against their maximum possible value (“halfWidth” and “halfHeight” in the pseudocode), so the distortion correction will automatically scale to match the X and Y dimensions of the image.

If the image has circular distortion, but the image shape is *not* a square, you can set halfWidth and halfHeight to the same value to enforce circular correction. Maybe something like this:

`If (halfWidth < halfHeight) Then halfHeight = halfWidth Else halfWidth = halfHeight`

That would take the minimum of width vs height and use it to calculate both X and Y, giving you a circular correction regardless of image shape.

Hi Tanner

The circular distortion has been completely removed, irrespective of the image shape. But the issue is that there are some slightly curved lines, especially toward the bottom-right part of the image, even after executing the lens correction. I tried making the image square, but it didn’t help. Any suggestions?

Thank you so much in advance :)


Dear Tanner,

Hi, I want to write this code in MATLAB. Can you help me?

Great… thanks for the pseudocode… It worked well, just a slightly blurred image.

Hello ! Thanks for your pseudocode, it helped a lot !

I noticed that the higher the strength is, the more the image is cropped. I would like to know if you have a way to determine the size of the new image as a function of the strength and the size of the original image.

Also, could you give some details on the bilinear interpolation you’re using, or propose another interpolation to cancel the blurring effect?

Hi Armand. Bilinear Interpolation is a very straightforward algorithm. Wikipedia has all the information you need.

For even better results (at some trade-off to speed), you could also try Bicubic Interpolation. Automated implementations of these functions are provided by most imaging libraries.

Armand… bicubic interpolation produced better results than bilinear for me… The circular edges appeared almost perfect compared to bilinear…

Hi Tanner. What should I change in this algorithm to make fisheye effect?

Where do the index values (x, y) start from?

From the middle of the image, or from the top left?

The mathematics/modelling behind the solution presented here is discussed in:

Ishii, C., Sudo, Y., and Hashimoto, H.: “An image conversion algorithm from fish eye image to perspective image for human eyes”. Proc. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, Port Island, Kobe, Japan, July 2003, vol. 2, pp. 1009–1014.

Also in: Hughes, C., Glavin, M., Jones, E., and Denny, P. (2009). “Wide-angle camera technology for automotive applications: a review”. IET Intelligent Transport Systems, 3(1):19–31.

Hi Tanner! Is this algorithm fast enough to work with HD video (1280×720) in real time? Also, I use the 640×480 mode of a Logitech C270 camera, which originally has a bigger resolution, and it looks like for 640×480 they use the upper-left (?) corner of the sensor. So I suppose the distortion is not symmetric in this case. Will this algorithm still work well, or do I need to add some correction to it? Thanks a lot!

Hi Mike. Yes, I actually designed this algorithm explicitly for real-time usage. If the lens is fixed, you can precalculate all (x, y) positions for the transformation and store them in a table. Then this becomes an O(n) operation (which is easily parallelized, no less!) which is the best you can hope for in real-time processing.
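The precalculation idea above can be sketched in Python like so (the function name is my own; for real video you would typically store the table in a flat array or use a library remap function instead of nested lists):

```python
import math

def build_remap_table(width, height, strength, zoom=1.0):
    """Precompute (sourceX, sourceY) for every destination pixel.

    For a fixed lens and resolution this table never changes, so
    per-frame work reduces to one table lookup per pixel, which is
    trivially parallelised across rows.
    """
    half_w, half_h = width / 2.0, height / 2.0
    if strength == 0:
        strength = 1e-5  # avoid divide-by-zero
    correction_radius = math.sqrt(width ** 2 + height ** 2) / strength
    table = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dx, dy = x - half_w, y - half_h
            r = math.sqrt(dx * dx + dy * dy) / correction_radius
            theta = 1.0 if r == 0 else math.atan(r) / r
            table[y][x] = (half_w + theta * dx * zoom,
                           half_h + theta * dy * zoom)
    return table
```

Each incoming frame is then corrected by sampling `frame` at `table[y][x]` for every destination pixel, with no trigonometry in the per-frame loop.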

If the distortion is not symmetric, you’ll need to change the center point used by the correction equation. Specifically, the lines:

`set newX = x - halfWidth`

`set newY = y - halfHeight`

…need to change. (For a center of (0, 0), you should be able to simply remove the “- halfWidth/Height” portions.)

Thank you for the algorithm. I coded it in PureBasic and it works well :)

However, the correction is applied to X and Y together. How can I change the correction strength in X and/or Y independently?

Currently my code is:

```
For x = 1 To image_file\Largeur_image - 1
  For y = 1 To image_file\hauteur_image - 1
    newX.d = x - halfWidth.d
    newY.d = y - halfHeight.d
    distance.d = Sqr(Pow(newX.d, 2) + Pow(newY.d, 2))
    r.d = distance.d / correctionRadius.d
    If r.d <> 0
      theta.d = ATan(r.d) / r.d
      sourceX = halfWidth.d + (theta.d * newX.d * image_file\pos_curs_zoom)
      sourceY = halfHeight.d + (theta.d * newY.d * image_file\pos_curs_zoom)
      If (sourceX > 1 And sourceY > 1) And (sourceX < image_file\largeur_image And sourceY < image_file\hauteur_image)
        Plot(x, y, Image_or(sourceX, sourceY))
      EndIf
    EndIf
  Next y
  SetGadgetState(#progress_calcul, x)
  ; UpdateWindow_(GadgetID(#progress_calcul))
  WaitWindowEvent(2)
Next x
```

OK, I actually found the solution myself. Thank you again :)

Hi Tanner,

Is there a rule of thumb for deciding the value of strength depending on the lens used to photograph the object? For example, I used a Nikon D750 camera with 28mm and 55mm lenses to photograph the same object. What value of strength should I use for the two shots?

Hi Prateek,

I’m afraid I don’t know of a model that can “predict” correction values based solely on the lens, but there have been attempts to create open-source lens databases that store this information.

The Hugin guys have a nice article on that here: http://hugin.sourceforge.net/docs/manual/Lens_correction_model.html

Absolutely outstanding, thank you for this. Question: I have a situation where barrel-distorted images are already being processed to find the location of a blob, so the position I get is a distorted value. How could I run this position through a variation of your solution to get the “correct” location? Thanks for any help on this.

Hi Bob. Thank you for the kind words.

This algorithm wasn’t designed for barrel distortion specifically (in fact, there are other, better algorithms for that). But as a general rule, any algorithm can be modified to operate on a specific location.

In the case of the algorithm above, you’ll notice that x and y are remapped around the center of the image (using the “halfWidth” and “halfHeight” values). Simply point those at some other “center” point, and the algorithm will automatically base its calculations off that, instead.

You can also limit the radius used to prevent correction beyond a certain distance. (This is nice for dealing with certain types of lens aberrations, where only a portion of the lens is affected, for example.)
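Both ideas above (a custom center point, plus a maximum correction radius) can be sketched as a single coordinate-mapping function in Python; the function and parameter names are my own illustration, not the article’s code:

```python
import math

def source_coords(x, y, center_x, center_y, correction_radius,
                  zoom=1.0, max_distance=None):
    """Map a destination pixel to its source position, measured from an
    arbitrary centre; pixels beyond max_distance are left unchanged."""
    dx, dy = x - center_x, y - center_y
    distance = math.sqrt(dx * dx + dy * dy)
    if max_distance is not None and distance > max_distance:
        return (float(x), float(y))  # outside the radius: no correction
    r = distance / correction_radius
    theta = 1.0 if r == 0 else math.atan(r) / r
    return (center_x + theta * dx * zoom, center_y + theta * dy * zoom)
```

Running the blob’s distorted coordinates through this mapping (with the same center and radius used when correcting the image) yields the position the blob occupies in the corrected image.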

Anyway, I don’t know if any of this is useful, but maybe it’s a start. This article has produced a lot of questions about correcting barrel distortion, specifically, so maybe it’s time for me to write a “Part 2″… :)

Thank you. Very helpful!

Can you give a MATLAB implementation of the code?

Great! I tried it in MATLAB. Works fine. Thank you.

```
clc
clear all
close all

I = imread('xxxx.JPG');
[h, w, ~] = size(I);
hh = h / 2;
hw = w / 2;
strength = 2;
r1 = sqrt(h^2 + w^2) / strength;

K = uint8(zeros(h, w, 3));
for ii = 1:h          % rows
    for jj = 1:w      % columns
        newx = jj - hw;   % horizontal offset from centre
        newy = ii - hh;   % vertical offset from centre
        dist = sqrt(newx^2 + newy^2);
        r = dist / r1;
        if r == 0
            theta = 1;
        else
            theta = atan(r) / r;
        end
        sourcex = round(hw + theta * newx);
        sourcey = round(hh + theta * newy);
        K(ii, jj, :) = I(sourcey, sourcex, :);
    end
end

figure;
imshow(K);
```

Hello Viprav,

I tried implementing your code and it worked well. Could you please explain to me how theta is actually calculated?

This algorithm is very good, but I don’t understand the principle behind it. Do you have a document that explains the algorithm? Thanks.

Hello. I tried this in VB.NET (based on the pseudocode), but somewhere it’s not working quite right. I am getting layers of curves (arc-like artifacts) only on the left side of the image. Can you help me? Or can you provide a VB.NET implementation? Thanks in advance.

Hello,

Can you please help me out with the implementation of the above MATLAB code? When I implement it, I get a zoomed image, and when reducing the strength value to 1.5 or 1.75, I am not able to detect any difference between the original image and the corrected image.

Thanks a lot for your pseudocode! It is amazing!

But I have a little problem understanding “theta = atan(r) / r”; I can’t grasp the mathematical meaning behind it. Could you explain it to me?

Thanks for your sharing again!