
Thread: [Award Winner] Bitmapped Images - The technical side of things explained.


  1. Redrobes (Administrator)

    Dithering

    Right, we have covered color and resolution, so we can finally start doing something with them. We can exchange color for resolution or resolution for color. If we up the resolution and reduce the number of colors, this is called dithering and it is a very common process. In fact, when you print out an image it will be done for you as part of the printer driver even if you don't ask for it. Dithering can be applied to all images, but paint apps generally do not perform it on color-indexed types. For the other types, the dithering is done per channel, so we might as well stick to one channel and use grayscale images to show the effects.

    There are many types of dithering, but the most common is halftoning, which is also known as an ordered dither. Each pixel from an image with a large number of gray shades is converted to fewer shades (usually black and white only) by substituting a small grid of new pixels whose average is the same shade as the original pixel. What is vital to understand is that if you dither an image, you must up its resolution by the dither grid size if you want to preserve the image quality. The dither grid size should depend on the number of shades being converted. So going from full grayscale to black and white is a 256-to-2 color drop, and a 16x16 grid would be the minimum to allow for the full range of averages. I doubt any paint app would use that; it is more likely to use a 3x3 or 4x4 grid instead. Therefore, expect some loss, and at least multiply the resolution by 4 in each direction.
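    To make the ordered dither idea concrete, here is a minimal Python sketch that thresholds an 8-bit grayscale image against a tiled 4x4 Bayer matrix. The file names are placeholders, and it dithers at the original resolution, so upscale first if you want to keep detail.

    Code:
    import numpy as np
    from PIL import Image

    # 4x4 Bayer threshold matrix, scaled to 0..1
    BAYER = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]]) / 16.0

    img = np.asarray(Image.open("input.png").convert("L"), dtype=float) / 255.0
    h, w = img.shape
    # Tile the matrix over the whole image and threshold each pixel against it
    thresh = np.tile(BAYER, (h // 4 + 1, w // 4 + 1))[:h, :w]
    out = (img > thresh).astype(np.uint8) * 255
    Image.fromarray(out).save("dither_ordered.png")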

    A second, slightly less common form is error diffusion. When converting, the PC keeps a running track of the error between the output dots and the source pixels, and chooses each new dot so that the local average of the dots stays as close as possible to the sample image. In black and white only, that means if the running average is too bright it puts a black dot down, and if too dark, a white one. Over space the error wibbles up and down but averages out to track the sample image. If the sample image is made up of thin lines this technique does not work so well, as it confuses it, but if the resolution is multiplied by 4 in each direction then, generally, error diffusion is better than halftone.
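    The classic recipe for this is Floyd-Steinberg diffusion, where each pixel's quantisation error is pushed onto its unprocessed neighbours; Pillow's convert("1") does much the same thing internally. A rough, slow pure-Python sketch with placeholder file names:

    Code:
    import numpy as np
    from PIL import Image

    img = np.array(Image.open("input.png").convert("L"), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0   # nearest of the two output shades
            img[y, x] = new
            err = old - new                      # push the error onto neighbours
            if x + 1 < w:               img[y,     x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x    ] += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save("dither_error.png")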

    The next attached image is the color one dithered into the 16-color Windows palette. Not that bad considering the bizarre set of colors contained in that palette.

    Now that you can appreciate what is happening behind the scenes, you will also appreciate printer DPI settings. A standard color desktop printer might have a DPI rating of 1440, but it only has 4 or maybe 6 ink colors. A color photo has to be dithered in CMYK down to those 4 inks before being sprayed. The 1440 rating is the number of dots per inch per ink component, so it is not the DPI at which you can print a full-color photo. A modern color inkjet printer uses error diffusion (sometimes called giclee, for some bizarre and cost-increasing marketing reason), so to print a full-color pixel you should expect to divide that number by something between 4 and 16 to get the real 'true color' DPI rating of the final print. So a 1440 DPI printer can print photos at about 200 pixels per inch, which is still quite good, as can be seen from our previous discussion about resolution. Also, as a tip, don't try to pre-dither images before sending them to the printer - let the printer do it.
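    As a quick back-of-envelope check (the dither cell sizes here are just assumed examples, not specs for any particular printer):

    Code:
    dots_per_inch = 1440
    for cell in (4, 8, 16):            # assumed dots of dither cell per full-color pixel
        print(cell, dots_per_inch / cell, "ppi")
    # prints 360, 180 and 90 ppi; somewhere in the middle of that range
    # is the roughly 200 ppi figure quoted above.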

    We stated earlier that we can go back the other way too: we can trade resolution for more colors. First, convert the image up to full color or grayscale, then average sections of it out. For example, if we can see that a halftone image was processed with a 4x4 grid, then average those 4x4 grids out. We can do the same with an error-diffused dither by using a blur. When you look at a dithered image from farther away than your ability to resolve the dot pattern, your eye is just doing that averaging for you.
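    A minimal sketch of that recovery for the halftone case, assuming the dither grid size (4x4 here) is known and using placeholder file names:

    Code:
    import numpy as np
    from PIL import Image

    bw = np.asarray(Image.open("dithered.png").convert("L"), dtype=float)
    h, w = bw.shape
    h4, w4 = h - h % 4, w - w % 4            # trim to a multiple of the grid
    # Average each 4x4 cell back into a single gray pixel
    gray = bw[:h4, :w4].reshape(h4 // 4, 4, w4 // 4, 4).mean(axis=(1, 3))
    Image.fromarray(gray.astype(np.uint8)).save("recovered.png")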
    Attachments: Dither_Ordered.png, Dither_Error.png, Dither_Error_Win.png, Dither_Recover.png

  2. Redrobes (Administrator)

    Undersampling

    You're sat in a tour bus in the Serengeti looking out over the vista at a herd of zebra. You get your digital camera out, click, and capture that image; driving away, you capture some more of the same zebra. The original scene is effectively infinite in detail, but your camera will capture a number of pixels depending on its resolution, encode them (probably compress them too) and store them on the flash card. From that point on you cannot get any detail from that image with a finer resolution than that of the camera's imaging sensor. The sensor has 'sampled' the infinitely detailed scene at regular intervals and collected only those samples.

    Later on you look at the zebra and note that in the middle picture the stripes on his front leg are reversed compared to the first picture, and then, wham! What the hell happened with that third image?

    What you're looking at are the effects of undersampling, through a process called 'aliasing'. The same effect is also called moiré when it refers to the 'fringes' that appear in closely spaced lines at a changing angle. An alias is a second instance of something, usually a second name for a gunslinger as in "Alias Smith and Jones" (http://www.imdb.com/title/tt0066625/). In graphics it refers to secondary instances of stuff that is not supposed to be there - like the new zebra stripes. And once you have them it is extremely difficult to recover the image so that it shows what the stripes should have looked like.

    The precise nature of what is happening to your zebra's leg is quite complicated and is wrapped up in a lot of math which we must lightly dip our toes into in order to fix the problem. Suffice it to say that in this instance the next sample skipped over a stripe and landed in the one after it. It's the same as the wheels going backwards in old cowboy movies of wagons rolling, where the film camera's sample rate of 24 frames per second is just below the rate at which the wheel spokes move around. The movie camera frame rate is undersampling the action.
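    A tiny, contrived Python illustration of the same thing: a fine stripe pattern sampled too coarsely comes back as a much wider, completely wrong stripe pattern. The numbers are made up purely to show the effect.

    Code:
    import numpy as np

    x = np.arange(1000)
    stripes = (x // 4) % 2        # black/white stripes 4 pixels wide
    sampled = stripes[::7]        # "camera" keeps only every 7th pixel

    print(stripes[:24])           # 0 0 0 0 1 1 1 1 0 0 0 0 ... the real stripes
    print(sampled[:10])           # 0 1 1 1 1 0 0 0 0 1 ... new, wider alias stripes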

    This is the main reason why holding original photos and art at high resolution is essential. If you're going to deliver a final piece of art at 1000x1000 pixels, you should work at much more than that - say 4000x4000 - and only convert to 1000x1000 at the final stage. If for any reason there are details in the original that are too fine to reproduce at 1000x1000, they will not only be lost but could cause weird effects at the final resample.
    Attachments: Zebra.png

  3. Redrobes (Administrator)

    Antialiasing

    Ok, so we have an aliasing issue; what we need is an alias-busting technique - cue antialiasing.

    Antialiasing is all about doing stuff at a higher sample rate than the final one, filtering, and then resampling down to the final rate. The idea is that the effects that would have come from the undersampling are removed by the filter and therefore never make it into the final image. So we're back to our zebra. If the camera had filtered off any detail too fine for the pixel sample rate to capture, then as the stripes on the zebra's leg got thinner they would eventually blur out and become a solid mid gray. And as it got farther away still, the whole animal would become solid gray, but it would at least look better than the new stripes from the earlier example.

    So, always work at a higher resolution than the final one. Make a copy of the high resolution image, blur it just a little so that a few pixels blur together, and then resample it down to the final size. If going 2 to 1, blur just enough for two pixels to look like a single blob and then make it half the size; if going 10 to 1, blur so that 10 pixels become one blob and then make it one tenth the size.
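    As a rough Pillow sketch of that recipe (file names and the 4x working factor are just placeholders, and the blur radius is a starting point to experiment with):

    Code:
    from PIL import Image, ImageFilter

    factor = 4                                   # working at 4x the final size
    hi = Image.open("work_4000x4000.png")
    blurred = hi.filter(ImageFilter.GaussianBlur(radius=factor / 2))
    final = blurred.resize((hi.width // factor, hi.height // factor), Image.LANCZOS)
    final.save("final_1000x1000.png")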

    This form of antialiasing is known as "supersampling"; it is the same thing as the "Full Screen Antialiasing" (FSAA) setting when configuring video cards for games.
    Attachments: Zebra_AA.png

  4. Redrobes (Administrator)

    Resampling

    Resampling means changing the image resolution, i.e. the number of pixels, while preserving the original image in the best possible way. The techniques involved differ depending on whether you are going up or down.

    Down means taking a big image and making it smaller. You're going to lose information, and what we would like is to lose the least amount. Although all paint packages can do it in a few key presses and one pass, they all seem to be universally poor at it. If you're going from 1000 to 800, the best way is to upsample from 1000 to a large multiple of 800, like 3200 or 4000, then blur and downsample to 1/4 or 1/5. If changing by a much larger ratio, like 1000 to 173 or something like that, find a nice multiple of 173 larger than 1000 - i.e. 1730 - upsample to that first, then blur 10 pixels into one, and then downsample by 1/10th. Many people say always use bicubic or always use Lanczos, but I disagree, and I show the results here with bicubic from 1000 down to 173. Maybe you disagree. Mathematically a sinc resample should be the best possible, but I don't think the paint apps implement the full sinc, and the windowing makes the resample less effective as more scaling is applied.
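    A sketch of that upsample-to-a-multiple, blur, then divide-down route for the 1000-to-173 case, using Pillow (file names are placeholders and the blur radius is something to experiment with):

    Code:
    from PIL import Image, ImageFilter

    img = Image.open("source_1000.png")
    multiple = 10                                 # 173 * 10 = 1730, just above 1000
    big = img.resize((173 * multiple, 173 * multiple), Image.BICUBIC)
    big = big.filter(ImageFilter.GaussianBlur(radius=multiple / 2))
    small = big.resize((173, 173), Image.BICUBIC)
    small.save("result_173.png")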

    Up means taking a small image and making it bigger, and here the paint apps seem to do it as well as can be expected, so picking the right algorithm is all that's needed. Again, people often say that bicubic or Lanczos is the best, and for most images I would agree, but there are exceptions. For general work, including maps and photos, I think it's true, and below is a sample sheet.

    Where the situation changes is with noise and ringing and with small scales. If you are resampling from 997 to 1000 or something very very close to what you want then I would use a pixel/point/nearest neighbor based resample because for about 99% of that image the pixels will not change. Below is a set of lines resampled up very slightly.
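    For that near-identity case the Pillow call would just be (placeholder file names):

    Code:
    from PIL import Image

    img = Image.open("lines_997.png")
    img.resize((1000, 1000), Image.NEAREST).save("lines_1000.png")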

    You will have to save and zoom into this last image to see what's happening; my monitor makes all three look pretty much gray.
    Attachments: Resample_Down.png, Resample_Up.png, Resample_Close.png

  5. Redrobes (Administrator)

    Resampling and its effect on color

    In many cases of resampling, some pixels are made from the average of several others. This is particularly true when upsampling with cubic weighting. We also said earlier that averaging two pixels in any color space can cause some odd results, so this section covers some gotchas to look out for and what to do about them. This is the area I am least familiar with, since I only use RGB and don't bother trying to fix these, but it's worth knowing about anyway.

    We noted that a blend from red to blue in RGB color space gives a deep mauve - ok, but look at this. The attached image is an upsample of some colors which have issues. Some of the corners are pure green and the others are red and blue together (magenta). When we average these two colors, the red and blue from one drop to halfway and the green from the other drops to halfway. End result: everything at halfway, giving mid gray! It's worth noting that resample types that don't blend, like nearest neighbor, don't have this issue. Also, it turns out that if you apply a gamma adjustment to the image, resample with a cubic, and then apply the reverse gamma adjustment, the problem gets fixed. I haven't fully convinced myself why this is true, but I am assured it is.
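    Here is a sketch of that gamma-adjust / resample / reverse-gamma sequence, assuming a simple power-law gamma of 2.2 rather than the exact sRGB curve, and accepting a little precision loss from going back to 8 bits between steps (file names are placeholders):

    Code:
    import numpy as np
    from PIL import Image

    gamma = 2.2
    img = np.asarray(Image.open("colors.png").convert("RGB"), dtype=float) / 255.0
    linear = img ** gamma                              # gamma adjustment (to linear light)
    lin_img = Image.fromarray((linear * 255).astype(np.uint8))
    big = lin_img.resize((lin_img.width * 4, lin_img.height * 4), Image.BICUBIC)
    result = (np.asarray(big, dtype=float) / 255.0) ** (1.0 / gamma)   # reverse gamma
    Image.fromarray((result * 255).astype(np.uint8)).save("resample_color_fixed.png")

    For what it is worth, the usual explanation is that the cubic is then averaging actual light intensities rather than gamma-encoded values, so two bright colors blend to a bright gray rather than a dark one.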
    Attachments: Resample_Color.png

  6. Redrobes (Administrator)

    Upsampling curvy black and white stuff

    Reserved space - but this one came up recently too.
    EDIT -- actually this comes up a lot and again today so here we go again... how to resample up just B&W stuff.

    First, upsample any way you like in stages of doubling (200%) until one more doubling would overshoot, leaving a final stage of less than double (say 1.7x).
    For each stage, double the size of the image, which makes it pixellated - a nearest neighbor / pixel resize is fine.
    Next, blur it, preferably with a Gaussian blur. The blur radius can be experimented with, but a factor of about 3 pixels or so works.
    Then use a contrast increase to clamp it back to being B&W again. I actually use about 95% rather than 100%, but it's up to you. Don't brighten or darken it when you do this, just up the contrast.
    Keep doing this in stages until the last stage is less than 200%, for which you might want slightly less blur than usual, but not by much.
    Here are the results (a code sketch of the whole procedure follows below). Everything is fine except where there is an acute internal angle, which tends to start filling in depending on the amount of blur used. So less blur helps there, but more blur is better for getting rid of pixellation; you have to experiment with it.
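    A rough Pillow sketch of those stages, with the contrast clamp approximated by a hard threshold and the blur radius left as something to tweak (the file names and the 5x target are placeholders):

    Code:
    from PIL import Image, ImageFilter

    img = Image.open("bw_lineart.png").convert("L")
    target = 5.0                                  # overall scale factor wanted
    scale = 1.0
    while scale < target:
        step = min(2.0, target / scale)           # double, or less on the last stage
        img = img.resize((int(img.width * step), int(img.height * step)),
                         Image.NEAREST)           # pixellated resize
        img = img.filter(ImageFilter.GaussianBlur(radius=3))
        img = img.point(lambda p: 255 if p >= 128 else 0)   # clamp back to B&W
        scale *= step
    img.save("bw_lineart_big.png")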
    Attachments: BWResample.png

  7. jfrazierjr (Community Leader)

    *Head 'sploid*


    Thanks for spending ALL the time to explain all this. Unfortunately, my head just can't comprehend 90% of it, so I stopped reading halfway through the first article. Sooooo, I gave you a rating and rep, 'cause even if it won't help me, I am 100% sure someone will read it and get some help from the work.

    Joe
    My Finished Maps
    Works in Progress(or abandoned tests)
    My Tutorials:
    Explanation of Layer Masks in GIMP
    How to create ISO Mountains in GIMP/PS using the Smudge tool
    ----------------------------------------------------------
    Unless otherwise stated by me in the post, all work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.

  8. yu gnomi

    This tutorial is great, and answers many questions I have had about imaging terms. Thank you for posting it.

    While I am still something of a dummy (at least a newbie) when it comes to imaging software in general, I am less of a dummy when it comes to electronics.

    Quote Originally Posted by Redrobes:
    it turns out that if you apply a gamma adjustment to the image, resample with a cubic, and then apply the reverse gamma adjustment, the problem gets fixed. I haven't fully convinced myself why this is true, but I am assured it is.
    Since I was also curious about gamma correction, I looked it up in a search engine and found the following document:
    http://www.cgsd.com/papers/gamma_intro.html


    Based on this, I think that gamma correction does nothing to the image data itself; it is simply a setting for when the image is displayed on a monitor. So if you first halve the original value, and later double it for a separate image, the new image ends up with the original value, and at no point is the image data altered by changing the gamma setting.

    I'm not sure if this helps to understand the issue or not. I would need to be a bit more familiar with sampling methods.

    Edit: Warning! The attempt at insight contained in this post is most likely either dead wrong or completely irrelevant to this topic. Every line after where I thank Redrobes for posting this tutorial should probably be disregarded.

  9. Redrobes (Administrator)

    Quote Originally Posted by yu gnomi:
    I think that gamma correction does nothing to the image data itself; it is simply a setting for when the image is displayed on a monitor.
    You can apply color correction and gamma adjustment globally at the video card stage, and that affects everything without touching the pixel values. But to do the color fix-up with resampling, you have to apply a gamma function to the pixels themselves, then do the stretch, and then apply the inverse gamma function. In the example provided, the first gamma function happens to do nothing because all of the colors start out saturated, but if you tried it on a photo you would need that first gamma adjustment if you're going to apply an inverse one later. You're right that gamma adjustments are used to compensate for the effects of monitors, especially CRT types. Why this process fixes the problem I am not sure, nor am I sure whether a different compensation curve would also work.
