View Full Version : image processing basics

04-13-2012, 05:59 PM
first post in this forum, i'm new here and to 2d coding, though i have a fair background in 1d dsp, which of course lends itself to 2d.

audio has a nice repository of algorithms at musicdsp.org, many of which work straight out of the box. some things like interpolation can be applied directly to images. i was somewhat surprised to not find a similar site for graphics.. though of course a lot of 2d includes 3d and then things become extremely involved... so i asked for this forum to be created, and expect i'll post in it as my knowledge progresses.

so itfp, thanks to the admin here and to waldronate, who is the first coder i've interacted with here, for steering me in the right direction.

one of the topics i've researched in audio is algorithmic/aleatoric/stochastic/okay, "random" generation, so my interests carry over with a focus on what is possible procedurally instead of by direct specification.

i'll post by topic:

04-13-2012, 06:15 PM
i expect that most often the first step for map generation is height field generation, commonly using perlin noise.

i prefer to roll my own rand() function using hal chamberlin's algorithm from 'musical applications of microprocessors' - i use unsigned INTs and perform math using INTs for speed (remember i know next to nothing about image coding, using GDIs etc., so this may be unwise) -

nrnd = 196314165 * nrnd + 907633515;

if you scale that sensibly and use it to generate white noise at audio rate, there is no discernible repetition. you can use 16 bit INTs and hear repetition every 2^16 samples, so SHORTs are useful for finite applications.

here's chamberlin's chart:
word len           A          B
       8          77         55
      12        1485        865
      16       13709      13849
      24      732573    3545443
      32   196314165  907633515
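eg. the 32-bit row as a minimal c sketch (the function names are mine, and i'm assuming unsigned 32-bit INTs - unsigned overflow wraps mod 2^32 in c, which is exactly what the algorithm relies on):

```c
#include <assert.h>
#include <stdint.h>

/* chamberlin's 32-bit linear congruential generator: A = 196314165,
   B = 907633515. unsigned arithmetic wraps mod 2^32, as intended. */
static uint32_t nrnd = 1;                /* any seed will do */

uint32_t next_rand(void)
{
    nrnd = 196314165u * nrnd + 907633515u;
    return nrnd;
}

/* scaled "sensibly": keep the top 24 bits and map to [0, 1) */
float next_randf(void)
{
    return (float)(next_rand() >> 8) / 16777216.f;
}
```

the SHORT variants work the same way with the 16-bit constants from the chart.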

perlin noise isn't the only method of generating height fields - eg. obviously each pixel could be randomly generated and then the image blurred/smoothed - but it is convenient and efficient. interpolation is used to smooth random points at different scales (typically at 'octaves', ie. doubling the frequency each time) and the octaves are summed at amplitude 1/2^n. the idea is also that the same data set (eg. 256*256 for unsigned CHARs) can be used for each octave, because the change of scale at any perspective obscures the similarity.

ken perlin's lecture and webpage on the topic:

there are several great illustrations of the topic online, easy to find, eg.

so i'd be a ponce to repeat them all here. having generated noise by several methods in audio i can think of no method that lends itself better to the solution.

04-13-2012, 06:23 PM
slope mapping:

i haven't implemented this yet, only verified that it is effective:

take the four adjacent pixels - one to each side, one above and one below - and take the differences (or 'deltas', as i believe mathematicians like to call them), eg.

dX = n[x+1][y] - n[x-1][y];
dY = n[x][y+1] - n[x][y-1];

pass the two arguments to arctan2 (whatever that is..):
atan2(dY, dX);

..and you'll get something with a sign and up to +/- 2*pi which can conveniently be applied to a sin() function eg. for directionally lighting/shadowing each pixel, and adding an appropriately scaled constant allows modification of the direction.

04-13-2012, 06:35 PM

so far, the largest single repository for image processing algorithms i've found is the FAQ for a mailing list:

whether this is from there or elsewhere (so far i've noted a few but used none) -

corners = ( Noise(x-1, y-1)+Noise(x+1, y-1)+Noise(x-1, y+1)+Noise(x+1, y+1) ) / 16
sides = ( Noise(x-1, y) +Noise(x+1, y) +Noise(x, y-1) +Noise(x, y+1) ) / 8
center = Noise(x, y) / 4

in audio these would be called FIRs or finite impulse response filters, as opposed to IIRs, which i haven't seen much reference to for images yet. the few other filters i've seen for blurring, smoothing, sharpening and edge detection are all based on similar adjacent-pixel kernels, and i suppose are performed repeatedly. i'll stop posting now and give myself a chance to code more :)
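eg. that corners/sides/center kernel as a function - a sketch, the array size and function name are mine. the nine weights sum to 1, so a flat image passes through unchanged:

```c
#include <assert.h>

#define N 8

/* apply the corners/16 + sides/8 + center/4 kernel at one interior pixel */
float smooth_at(float img[N][N], int x, int y)
{
    float corners = (img[x-1][y-1] + img[x+1][y-1] +
                     img[x-1][y+1] + img[x+1][y+1]) / 16.f;
    float sides   = (img[x-1][y] + img[x+1][y] +
                     img[x][y-1] + img[x][y+1]) / 8.f;
    float center  = img[x][y] / 4.f;
    return corners + sides + center;
}
```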

04-13-2012, 07:41 PM
The reason that you don't see an IIR in image processing is because there is no time component. If you have an array of images over time, it's called video and now you need to do fancy 3D image processing and/or pixel areas over time (that latter bit is exactly analogous to a 1-D process like audio processing).

04-13-2012, 09:29 PM
I have a tut and talk about some of the common / basic image algorithms in the post:

also, arctan2 is a function in programming languages which implements arctan, except that plain arctan would have to be handed infinite values to produce certain results, so arctan2 is coded to return explicit results in those cases. Arctan is usually used to get an angle from a gradient, so when dx -> 0 in dy/dx the function returns PI/2 or -PI/2 as appropriate. It also handles angles > +/- 90 deg properly. So arctan2 is a doddle to use; with plain arctan you need a few extra lines of code to trap the infinities.

oh and arctan2 should give +/- pi not 2pi. From -pi to +pi is one full rotation.

And if you're after lots of formulas like the one you quoted for the random number gen then the wiki page on them is a good source:

04-14-2012, 01:39 PM
The reason that you don't see an IIR in image processing is because there is no time component. If you have an array of images over time, it's called video and now you need to do fancy 3D image processing and/or pixel areas over time (that latter bit is exactly analogous to a 1-D process like audio processing).

IIRs can be applied to any set of periodically sampled data. it may not be conventional to do so, and there may be a differing vernacular, but an IIR could certainly be applied to rows and/or columns of pixels :)

they are not computationally intensive - eg. a highpass filter with a steep coefficient could be run once in both directions over every row and column, with dramatic results, achieving a radius/"wavelength" effect that would require hundreds of passes with grid filters :) as said, i'm new at this so i expect i have plenty to learn.

one of the common foundations for audio processing is dspguide.com - downloadable in pdf chapters. it's general theory and has accessible explanations of FFT and other common dsp methods suitable for 2d procedure as much as 3d procedure. might be a good read for some aspirants :)

of course, now we are seeing that my secret agenda in soliciting for this forum was to mine goodies for myself :D thanks redrobes and waldronate :)

i'll add a thingy on IIRs for images..

04-14-2012, 03:09 PM
this filtering was performed using the well-known formulas from robert bristow-johnson's biquad cookbook.. code's right in there (IIRC the signs for the bandpass y coefficients need to be flipped..)

actually using them will probably be aggravating unless you have a background in audio processing or IIR filtering, or are otherwise acquainted with fourier theory :) generally these filters (the state variable is another trivial algorithm) have a phase effect as well as a frequency-dependent attenuation (dspguide.com mentioned above provides a thorough background here), but not always.

what they would allow you to do to images is apply a wide effect at low computation. the actual process (after the coefficients are calculated) uses a few buffer variables to record the last two states of the input (x[n], x[n-1], x[n-2]) and output (y[n], y[n-1], y[n-2]) and a scalar for each.. so the computation would look like this:

y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] + b1*y[n-1] + b2*y[n-2];
x[n-2] = x[n-1]; x[n-1] = x[n];
y[n-2] = y[n-1]; y[n-1] = y[n];

this can often be reduced by a few multiplies as often a0 and a2 are the same coefficient or similar.
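a sketch of the whole loop in c (struct and function names are mine; the coefficient naming follows this post - a* feed-forward, b* feed-back - which is the reverse of the cookbook's convention, and assumes the cookbook's a0-normalisation has already been applied):

```c
#include <assert.h>

/* direct form I biquad run across a buffer in place */
typedef struct { float a0, a1, a2, b1, b2; } biquad;

void biquad_run(const biquad *c, float *buf, int n)
{
    float x1 = 0.f, x2 = 0.f, y1 = 0.f, y2 = 0.f;
    for (int i = 0; i < n; i++) {
        float x0 = buf[i];
        float y0 = c->a0 * x0 + c->a1 * x1 + c->a2 * x2
                 + c->b1 * y1 + c->b2 * y2;
        x2 = x1; x1 = x0;        /* the input history must be kept too */
        y2 = y1; y1 = y0;
        buf[i] = y0;
    }
}
```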

in this image, the first frame compares the original signal (a bandlimited triangle wave.. similar to terrain lol) to the 2nd order highpass filter from the cookbook.. the phase distortion is endemic to the filter; it is relative to the wavelength in one direction or the other.. and of course audio/1d filtering is an *extensive* field.. zero-phase filters exist, and SINC filters (dspguide) would probably do fascinating things with images..

in the 2nd frame, i recorded the 'audio', reversed it and filtered it again.. this would be analogous to running the biquad in one direction along a row of pixels, then running it in the other direction.

in the third frame, i've scaled the output to resemble the input. i'm not particularly anxious to prove any point about the utility of IIR filters, only demonstrating them for those who wish to explore them :)


04-14-2012, 03:19 PM
How about this: An IIR on a fixed-size data set such as an image can be implemented as an FIR with a filter width the size of the image, if I recollect correctly. The classic use of IIRs to provide feedback into a signal isn't nearly as useful for the most part in image processing, where there is a distinctly limited data size. There are some examples of low-pass filtering out there (often under the term recursive filters rather than IIR), but it's not a particularly common usage, in my experience.

04-14-2012, 04:05 PM
I have a tut and talk about some of the common / basic image algorithms in the post:

now that i've had time to read it :) familiar territory for me, with the exception of the upsampling work, which is astounding.

you may be interested to know that the audio field has a different take on resampling. itfp, i would expect that audio upsampling congruent to the work in this thread is extremely pricey. audio people are an extremely precious bunch, as i expect you're aware. i am nescient in regards to anything expensive, so all the upsamplers i've encountered for audio are very primitive.

itsp, downsampling and bitcrushers are par for the course in audio - most modern genres would be half of what they are without it, it's used very extensively. quite honestly, some of the "creative" upsampling in that thread would probably be lucrative. if you've got something that no one else has, you can easily set your own price and have a significant customer base, if interested in such things. i expect audio parallels if not dramatically exceeds auto and harley davidson sales in terms of translating testosterone into capital. it's quite horrifying really.


04-14-2012, 04:15 PM
waldronate - i agree, it's not esoteric and there would already be significant application if it offered a strong advantage. i can't help but consider the cases :) the techniques are applicable even at small sizes.. for instance, if i needed to write code to dramatically blur an image today, i'd certainly try a first order lowpass/ "leaky integrator" in each direction before running a 3x3 grid filter for several dozen passes. but that's only because of my erudition.

i'm currently rebuilding my app, perhaps i'll give it a test run to blur a height field :)
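a sketch of that first-order blur over one row (names mine; run it over every row, then every column, for the 2d effect - the backward pass cancels the phase lag of the forward pass, so the blur comes out roughly symmetric):

```c
#include <assert.h>
#include <math.h>

/* first-order lowpass ("leaky integrator") run forward then backward over a
   row: one cheap O(n) sweep per direction, arbitrary blur radius via a */
void leaky_blur_row(float *row, int n, float a)   /* 0 < a <= 1 */
{
    float s;
    s = row[0];
    for (int i = 0; i < n; i++) { s += a * (row[i] - s); row[i] = s; }
    s = row[n - 1];
    for (int i = n - 1; i >= 0; i--) { s += a * (row[i] - s); row[i] = s; }
}
```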

04-14-2012, 09:28 PM
is a good starting point. A lot of the GDC discussions are useful for application to real-time graphics systems today; I would recommend SIGGRAPH papers for more general research. http://kesen.realtimerendering.com/ is an excellent resource for locating many papers in the computer graphics field.

One of the fun things about graphics systems these days is that you're often looking at hundreds to thousands of processors and relatively inexpensive systems (way less than $5k) with a fair amount more than 10 teraflops of single-precision floating-point power. When you start looking at GPUs, it's often easier and sometimes faster to do all of your work directly in floating-point than try to keep everything in fixed-point or integer notations. Even for some CPU implementations these days you may find that floating-point will implement certain algorithms faster than integer.

Modern PC systems are hopelessly memory-bound and the reason that GPUs can be faster than CPUs is often that GPUs have more memory bandwidth (100s of GB/s) compared to PCs (10s of GB/s). Using half-floats can ease the pressure a little, but you have to live with the precision loss. But I would much rather code floating-point algorithms than fixed-point ones because I have fewer things to keep track of.

I'm lazy enough that I'd probably code a blur as an FFT on the image and blur kernel, multiply the two images, and then inverse FFT on the result image. That way I can get an arbitrary blur kernel for the same cost as a symmetric one (and a sharpen pretty much comes down to a divide instead of multiply).

04-15-2012, 03:19 PM
is a good starting point.

i am unable to rep you at present :p :)

this does demonstrate IIR lowpass (apparently a 'gaussian blur' technique) and resonant biquad filtering. i can't see any application for the latter beyond a costly and somewhat arbitrary video transform, but you never know. peanut butter and chocolate thing.

tackling mr. petzold (for whom i have no words) currently

04-21-2012, 01:05 PM

..so i've got my basic height field editor application, with a few filters and brushes. i'm using SetPixel in WM_PAINT to redraw the window, which is archaic.

is DirectDraw the choice for upping the performance of my app to a more tolerable level while maintaining compatibility?

another question i wanted to ask is, being new to this community per se, are there any holes that need to be filled in terms of applications? i'm expecting WILBUR covers all the bases for 'informed' fantasy map generation..

04-21-2012, 06:38 PM
Don't use SetPixel or even SetPixelV as they are amazingly slow. You could use DirectDraw but that might be a pain, and if you're still going to use it with calls to set-pixel APIs then it will be even slower than GDI.

The best bet for you right now with Windows only is to use the CreateDIBSection function and look into the ppvBits parameter. That gets you a memory pointer to the bitmap bits so you can update them without going through the hassle of SetPixel, i.e. it's much, much faster. There are faster methods, but not likely any that are both faster and easier.

Once you have a bitmap modified to your user's liking, then during the WM_PAINT you can blit the bitmap direct to the paint DC with one call to StretchDIBits and not faff about with per-pixel calls. It will easily be more than 100x faster.

From a compatibility point of view OpenGL is more cross-platform than DirectDraw. If you're interested in 3D height terrain viewing then you can look at my free 3D terrain viewer.
Just put a "height.bmp" file into the same folder and run it up. The height bitmap is greyscale. If you tried my instant islands then it exports that same file so put the two together. Run II then export the terrain you like then run DF.

Oh and on the blur I am with Waldronate, I would use an FFT or a DFT, multiply in a gaussian kernel, and then go back using another FFT/DFT in reverse. If you were blurring by just a couple of pixels then a convolution is easier to code and probably faster on a normal CPU. The larger the blur, the more likely the FFT will win in performance.

04-21-2012, 07:18 PM
thank you for the practical answer :) you've probably saved me a lot of time.

06-06-2012, 01:50 PM
found this last night, strange that i didn't find it when i was researching the topic.. ken musgrave's dissertation on procedural landscapes -


currently fishing for ways to improve the efficiency of the 3d perlin algorithm.. looks real nice but takes over a minute to process for 8 octaves (over 1.4 billion cubic interpolations..) and atm only rotation on the y-axis is implemented (that would add a few billion transcendental functions).

thought about using a precomputed 3d array.. eg. generate an 8-8-8 perlin array, then use tricubic interpolation to flesh that out to say a 64-64-64 array (2^18 cubic interpolations) so there are 8 'curved' points between each original 'sample'. the 64-64-64 array is then read with linear interpolation (0.33 billion lerps).
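the read-out step, sketched as plain 3d linear interpolation in c (7 lerps per sample, needing only the 8 surrounding corners; the names and the tiny lattice size are mine):

```c
#include <assert.h>

#define S 4

float lerp(float t, float a, float b) { return a + t * (b - a); }

/* trilinear sample of a 3d lattice: 4 lerps in x, 2 in y, 1 in z */
float trilinear(float v[S][S][S], float x, float y, float z)
{
    int ix = (int)x, iy = (int)y, iz = (int)z;
    float fx = x - ix, fy = y - iy, fz = z - iz;
    float c00 = lerp(fx, v[ix][iy][iz],     v[ix+1][iy][iz]);
    float c10 = lerp(fx, v[ix][iy+1][iz],   v[ix+1][iy+1][iz]);
    float c01 = lerp(fx, v[ix][iy][iz+1],   v[ix+1][iy][iz+1]);
    float c11 = lerp(fx, v[ix][iy+1][iz+1], v[ix+1][iy+1][iz+1]);
    float c0  = lerp(fy, c00, c10);
    float c1  = lerp(fy, c01, c11);
    return lerp(fz, c0, c1);
}
```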

for some reason my compiler crashes with very large arrays.. i can do 4-4096-2048 (2^25) but not 2^26.

06-06-2012, 09:45 PM
If your compiler is a 32-bit compiler, it would likely not be able to generate data much more than 1GB or so in a single allocation (you're breaking between 256MB and 512MB, which might also be a 32-bit compiler limit).

One efficiency insight is that the scale of each octave varies. Thus, a small power of two array would capture the first octave at adequate resolution, one twice as large for the next octave, and so on. It doesn't necessarily save you any time in the initial generation phase, but as you zoom in you only need to compute the new octaves of data and if you zoom out then you already have much of the data precomputed. Caching can also be very important for certain gradient-based effects such as erosion ( http://dmytry.com/mojoworld/erosion_fractal/home.html has an example ).

Also, if you're looking for visual effects rather than trying to do fancy processing of the generated data, consider http://mrl.nyu.edu/~perlin/demox/Planet.html for inspiration. It uses a coarse-to-fine scheme for drawing terrain zooms. Users get to see much of what they're interested in very quickly and the software can continue to generate additional detail as the user waits longer. Caching of lower-resolution details and variable numbers of octaves may also be at work here to keep the amount of computation constant from frame to frame.

And on the Musgrave front, consider that only a short 20 years later, what he described can be done in real-time directly in your browser ( http://codeflow.org/entries/2011/nov/10/webgl-gpu-landscaping-and-erosion/ ). A more optimized version would probably run a bit faster, but things are usually memory bandwidth-limited these days. Except for pathological cases like procedural generators, of course.

And again, simplex noise is somewhat more computationally efficient than the interpolant used in straight Perlin noise. And a simple 2D wavelet-type "scale-and-add" operation will always be faster than a volume interpolation. If you can push your basis function texture to the graphics card, it will do the interpolation, scaling, and adding all for you. Plus rotation, too, if you're into that sort of thing.

06-07-2012, 02:18 PM
If your compiler is a 32-bit compiler, it would likely not be able to generate data much more than 1GB or so in a single allocation (you're breaking between 256MB and 512MB, which might also be a 32-bit compiler limit).

quite right, i was only looking at the indexing not the footprint :P

i'm going to have to think about what you've said as i may be going about this in the wrong fashion. my implementation seems very elementary to me without any optimisations between octaves. i'll paste it below in case you feel like (lol) poring through someone else's code..

simplex noise didn't sink in at all. i'd imagine a year of familiarisation with 2d/3d thinking may change that. fortunately, since this is more of an exercise, the only thing i'd really gain by speed improvements is the time spent in development. ty again.

for (y = 0; y < 2048; y++) {
fla = (float)y / 651.580337f; // 2047/pi
fy = cos(fla) * hfbase; fr = -sin(fla) * hfbase;
for (x = 0; x < 4096; x++) {
hf[hu][x][y] = 0;
flo = (float)x / 651.7394919613114f + adjlong; // 4095/tau
fx = cos(flo) * fr; fz = sin(flo) * fr;
// fx, fy, and fz mapped to sphere of radius 1 with center at origin if hfbase is removed from above

float sum = 0.f;
for (j = 0; j < woct; j++) {
dx = fx * pp2[j]; dy = fy * pp2[j]; dz = fz * pp2[j];
if (dx < 0.f) dx += 131072.f; ix = (int)dx; dx -= ix; x1 = ix & 0x1f;
if (dy < 0.f) dy += 131072.f; iy = (int)dy; dy -= iy; y1 = iy & 0x1f;
if (dz < 0.f) dz += 131072.f; iz = (int)dz; dz -= iz; z1 = iz & 0x1f;

x0 = x1 - 1; x2 = x1 + 1; x3 = x1 + 2;
y0 = y1 - 1; y2 = y1 + 1; y3 = y1 + 2;
z0 = z1 - 1; z2 = z1 + 1; z3 = z1 + 2;
x0 &= 0x1f; x2 &= 0x1f; x3 &= 0x1f;
y0 &= 0x1f; y2 &= 0x1f; y3 &= 0x1f;
z0 &= 0x1f; z2 &= 0x1f; z3 &= 0x1f;

p0 = tricint(dx, perlin[x0][y0][z0], perlin[x1][y0][z0], perlin[x2][y0][z0], perlin[x3][y0][z0]);
p1 = tricint(dx, perlin[x0][y1][z0], perlin[x1][y1][z0], perlin[x2][y1][z0], perlin[x3][y1][z0]);
p2 = tricint(dx, perlin[x0][y2][z0], perlin[x1][y2][z0], perlin[x2][y2][z0], perlin[x3][y2][z0]);
p3 = tricint(dx, perlin[x0][y3][z0], perlin[x1][y3][z0], perlin[x2][y3][z0], perlin[x3][y3][z0]);
pa = tricint(dy, p0, p1, p2, p3);

p0 = tricint(dx, perlin[x0][y0][z1], perlin[x1][y0][z1], perlin[x2][y0][z1], perlin[x3][y0][z1]);
p1 = tricint(dx, perlin[x0][y1][z1], perlin[x1][y1][z1], perlin[x2][y1][z1], perlin[x3][y1][z1]);
p2 = tricint(dx, perlin[x0][y2][z1], perlin[x1][y2][z1], perlin[x2][y2][z1], perlin[x3][y2][z1]);
p3 = tricint(dx, perlin[x0][y3][z1], perlin[x1][y3][z1], perlin[x2][y3][z1], perlin[x3][y3][z1]);
pb = tricint(dy, p0, p1, p2, p3);

p0 = tricint(dx, perlin[x0][y0][z2], perlin[x1][y0][z2], perlin[x2][y0][z2], perlin[x3][y0][z2]);
p1 = tricint(dx, perlin[x0][y1][z2], perlin[x1][y1][z2], perlin[x2][y1][z2], perlin[x3][y1][z2]);
p2 = tricint(dx, perlin[x0][y2][z2], perlin[x1][y2][z2], perlin[x2][y2][z2], perlin[x3][y2][z2]);
p3 = tricint(dx, perlin[x0][y3][z2], perlin[x1][y3][z2], perlin[x2][y3][z2], perlin[x3][y3][z2]);
pc = tricint(dy, p0, p1, p2, p3);

p0 = tricint(dx, perlin[x0][y0][z3], perlin[x1][y0][z3], perlin[x2][y0][z3], perlin[x3][y0][z3]);
p1 = tricint(dx, perlin[x0][y1][z3], perlin[x1][y1][z3], perlin[x2][y1][z3], perlin[x3][y1][z3]);
p2 = tricint(dx, perlin[x0][y2][z3], perlin[x1][y2][z3], perlin[x2][y2][z3], perlin[x3][y2][z3]);
p3 = tricint(dx, perlin[x0][y3][z3], perlin[x1][y3][z3], perlin[x2][y3][z3], perlin[x3][y3][z3]);
pd = tricint(dy, p0, p1, p2, p3);

o = tricint(dz, pa, pb, pc, pd);
if (j < 2) {
sum += o * pn2[j + 1];
} else {
if (o > 32767.5f) o = 65535.f - o;
sum += o * pn2[j];
}
}
sum *= sumdiv;
i = (int)sum;
if (i < 0) i = 0;
else if (i > 65535) i = 65535;
hf[hu][x][y] = i >> 8;
} }

wow, that gradient b/g makes code a trip :p

(realising that i ought to add latitudinal rotation as well, as it only needs to be performed for each pixel, not each octave..)

06-07-2012, 10:55 PM
It's tough to say how things will work out without seeing the data structures behind that code, but it seems like it ought to be fairly straightforward to decode.

Depending on your compiler and options, the truncation to integer may be one of the slowest operations that you have. Similarly, if you're targeting x87 code rather than something like SSE2, you're leaving a lot of performance on the table. Indexing the 3D array may (or may not) be more performance intensive than indexing a 1D array with precomputed offsets, especially on P4-class processors that are lacking in barrel shifter resources.

I'm a bit confused by the number and type of interpolations that you have. Classic Perlin noise has 7 linear interpolations (4 in x, 2 in y, and 1 in z). The fractional index term is first modified by (3*t*t-2*t*t*t) to get the desired smooth behavior (the improved Perlin noise uses a better quintic function).

Spinning the sphere should definitely be something done outside of the main loop. The code that I normally use for such computations passes in the cartesian coordinate for evaluation and doesn't know anything about how the world is sampled.

06-09-2012, 01:26 PM
! dang..

i learned interpolation beyond linear from julius o. smith's documents (stanford dsp professor), which has made image processing very interesting. what is generally called 'bicubic' interpolation in audio is a 1d process involving four samples.. distinctly different from the terminology in graphics, where 'bicubic' refers to 2d interpolation and, if i understand it, 'cubic' indicates this 4-point method. my method of course requires 64 samples in 3d space... no wonder it's bloody slow!

float tricint(float td, float t0, float t1, float t2, float t3){
float f0, f1, f2;
f0 = (t3 - t2) - (t0 - t1);
f1 = (t0 - t1) - f0;
f2 = t2 - t0;
return ((f0 * td + f1) * td + f2) * td + t1;
}
illustrated in a fairly useless app here.. (unfortunately this screenshot doesn't strongly illustrate how this method produces samples outside the sample range).

some of the articles i've found mention the interpolation method generating samples outside of the [0,1] range, so i thought i was up to speed. (some readers may benefit from noting that i have labeled this 'tricubic' attempting to correlate nomenclature between the two fields, which may indeed be incorrect.)

will have a think.. may rip it out as i didn't try 3d linear.. may stick with it as i like the output. if this was audio, soon i'd be advertising 2048^3 sinc interpolation and have an army of fake profiles wiping the floor with anyone who didn't feel this was a necessity :)

06-10-2012, 04:19 PM
I realize that you're going at this from a clean room approach, but there is a lot to be learned from looking at some of the reference implementations of Perlin noise ( http://mrl.nyu.edu/~perlin/noise/ is his 2002 noise function implemented in Java; a few minutes shopping on the Perlin Noise Wikipedia page will also pay off handsomely ).

Also, when summing octaves of perlin noise, I strongly recommend not having an exact integer distance between octaves because any discontinuities that show at lattice points will be amplified. This sort of thing is usually only an issue for smoothing functions that aren't second-derivative continuous like the original Perlin noise (improved Perlin noise uses a quintic function that's smooth in its first and second derivatives).

On the interpolation subject, linear is 1D, order 1; bilinear is 2D, order 1; quadratic is 1D, order 2; biquadratic is 2D, order 2; cubic is 1D, order 3, bicubic is 2D, order 3; so on and so on. You'll need order + 1 points for each dimension of the interpolation (1D linear needs 2, quadratic needs 3, cubic needs 4, quartic needs 5, etc.; 2D linear needs 4, quadratic needs 9, cubic needs 16, etc.)

06-10-2012, 11:32 PM
one of the first things learned when one starts to do things is that the precedent history of humans doing things is often somewhat disparate :)

! quite right about lacunarity. i only recently added lacunarity and persistence and am exploring the subtleties (eg. <2 = smoother coastlines). imperfection/glitch is valid and desirable in audio, which is partially why i document the 'failures'.

i've been playing with wilbur (haven't installed fantasy carto) at points and observing.. (this kind of analysis more or less decimated my 'enjoyment' of music hehe). you've packed a lot of options into wilbur.. (it's not often one finds an export mode for povray ime).

the app is moving forwards again! :) rendering is now practically fast enough to animate here using a 4x4 block.

06-11-2012, 01:43 AM
In case of emergency, there are always techniques based on the inverse FFT. If you fill a 2D spectrum with white noise, scale it by a 1/f falloff, and then run an inverse FFT on it, you get the basic fBm fractal. If you shape the spectrum differently, you get other sorts of effects. Such surfaces also have the nice property that they tile. A good IFFT implementation can be a fair amount faster than some of the noise-based spectral synthesis techniques.

http://people.cs.kuleuven.be/~ares.lagae/publications/LLCDDELPZ10SPNF/LLCDDELPZ10SPNF.pdf is also a fun and useful read.