# Very Large Maps, is it a problem?

Show 40 post(s) from this thread on one page
Page 4 of 4 First ... 234
• 05-07-2010, 10:49 PM
Talroth
Quote:

Originally Posted by a2area
so that's what being "ninja'd" looks like (0:

I can understand millions of colors.. as it makes a visible difference on screen... but why would anyone want to go beyond 64-bit? If it's not visibly different to the eye... it seems the only use would be for the sake of holding a wider array of data???

PS.. @antheon and jwbjerk... FYI, i wasn't being argumentative earlier just in case it came across that way (i dont think i did?).. just trying to clarify my earlier point about pixel size.

The expanded range for storing colour data isn't for displaying the colours themselves, but for doing maths on them.

With 32-bit colour you usually have four channels at 8 bits each. Run two colours through a function and each colour component has only a small scale to work on; keep pushing colours through different functions and you risk drifting away from the true value of the colour.

Think of it as doing maths with just integers: 1, 2, 3, etc. When you round 2/3 to 1 you're close, but you're not right. Keep doing this and you get farther and farther from the real answer, even if you round back to an integer at the end.

Going to a higher bit depth, like 256-bit colour, is like giving yourself a few decimal places to work with: 2/3 isn't 1, it's 0.67, which, when run through the next function, is a lot closer to the real answer than 1 would be.
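A quick sketch of that dilution in code (a hypothetical halve-then-double example; any chain of repeated operations behaves the same way):

```python
def darken_then_brighten_8bit(value, steps):
    # Halve the brightness `steps` times, rounding to a whole channel
    # value after every operation, as 8-bit storage forces you to.
    for _ in range(steps):
        value = round(value * 0.5)
    # Then double it back the same number of times.
    for _ in range(steps):
        value = round(value * 2)
    return value

def darken_then_brighten_float(value, steps):
    # The same operations in floating point; round only once at the end.
    for _ in range(steps):
        value = value * 0.5
    for _ in range(steps):
        value = value * 2
    return round(value)

print(darken_then_brighten_8bit(100, 10))   # 0: the value was rounded away
print(darken_then_brighten_float(100, 10))  # 100: fully recovered
```

Ten rounds of intermediate rounding destroy the pixel entirely, while the float version recovers the original value exactly.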

At the end of the day there is no scientific reason to display more than 32-bit colour for most humans. Even 24-bit RGB is 'close enough', from what I remember.
• 05-07-2010, 11:28 PM
RobA
Regarding DPI/PPI and Gimp: The default mode is "dot for dot", so a 300x300px image at 300dpi will take up 300x300 screen pixels at 100%. If, however, you turn off dot-for-dot mode, that same image is drawn on the screen as a 1" square, i.e. it is scaled from the image dpi to my display dpi so that things show up at "real world size".

-Rob A>
• 05-08-2010, 11:43 AM
Midgardsormr
And in Photoshop, viewing Actual Pixels (or double-clicking the Zoom tool) is equivalent to Gimp's "dot for dot." Print Size will show you an approximation of the real-world size if you were to print the document. Note that it is only an approximation, though it can be made more accurate by changing your Screen Resolution setting in Edit > Preferences > Units & Rulers. The default of 72 is likely to be incorrect for the majority of screens; mine, for instance, is somewhere around 101 ppi.

This is true in PS CS3 and CS4. I cannot verify the controls in other versions.

Also, to expand a bit on what Talroth is saying:
Although the eye won't really notice the difference in colours above 8 bits per channel (24-bit RGB), as you process an image that lack of precision will start to become visible as banding artifacts. If you start to see banding in your gradients, that's the likely cause. For heightfield processing, low bit depth results in noticeable terracing, where the land steps up abruptly instead of sloping smoothly.
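A toy illustration of that terracing, assuming a hypothetical `quantise` helper that snaps values to a given number of storable levels:

```python
def quantise(value, levels):
    """Snap a 0.0-1.0 height value to the nearest of `levels` storable steps."""
    return round(value * (levels - 1)) / (levels - 1)

# A gentle, smooth slope sampled at 1000 points.
slope = [i / 999 for i in range(1000)]

# At 16-bit depth every sample survives as a distinct height;
# at 4-bit depth the slope collapses into just 16 flat terraces.
high_depth = {quantise(v, 65536) for v in slope}
low_depth = {quantise(v, 16) for v in slope}

print(len(high_depth), len(low_depth))  # 1000 16
```

Those 16 surviving values are exactly the abrupt "steps" you see in a terraced heightmap or a banded gradient.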

I've been going to school to learn digital compositing for film, and the first step when bringing an image into the compositor is to convert it to 32-bit per channel floating point linear color (float, for short), even if the source is only 8 bits per channel. At first, it's like carrying around a drop of water in a 10-gallon bucket, but as the software processes the image, its information grows to fill the space. At the end of the process, the new image is usually output back to 8 bits per channel, but everywhere in between I'm working in float.
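A minimal sketch of that 8-bit-to-linear-float conversion, using a simple 2.2 gamma approximation rather than the exact sRGB transfer curve real compositors use; the function names are just illustrative:

```python
def to_linear_float(value_8bit, gamma=2.2):
    """8-bit channel (0-255) -> linear-light float (0.0-1.0)."""
    return (value_8bit / 255.0) ** gamma

def to_8bit(linear, gamma=2.2):
    """Linear-light float back to an 8-bit channel for final output."""
    return round((linear ** (1.0 / gamma)) * 255.0)

mid = to_linear_float(128)
print(mid)          # about 0.22: mid grey is much darker in linear light
print(to_8bit(mid)) # round-trips back to 128
```

Everything in between those two conversions happens in float, which is the "drop of water in a 10-gallon bucket" part.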
• 05-08-2010, 01:06 PM
Hai-Etlik
Quote:

Originally Posted by a2area
Actually, you are right.. which means that PPI is totally irrelevant in relation to document fidelity unless your image is resampled as well. Still.. the original point remains.. if a document is 100 pixels by 100 pixels... then it remains so whether it's set at 1ppi, 10ppi or 100ppi etc..

It's an option to choose which behaviour you want: View -> Dot for Dot
• 05-08-2010, 05:42 PM
Talroth
Quote:

Originally Posted by Midgardsormr
At the end of the process, the new image is usually output back to 8 bits per channel, but everywhere in between I'm working in float.

And to expand even more.

Another good reason for expanding to a 'full-sized' data type, such as a 32- or 64-bit float or int, is that, provided you have the memory to handle it, processing 'full-size' data is faster on most hardware than dealing with a 'packed' data type like an 8-bit byte.

Most systems deal in 32- or 64-bit memory words and have been streamlined for them. Accessing a 'sub-word' type usually means reading one part of a full-sized word, which is an extra step, or more, compared with handling the whole data block.
• 05-09-2010, 05:23 AM
Redrobes
@Mid: Good info there and I agree with it all.

@Talroth: Technically you are correct, but only at the purest level on the CPU, within the cache, and only for some processes. In many cases, and images are exactly these cases because there is so much data, the main slowdown is fetching the data from memory to the CPU to be processed. The CPU runs at some ludicrous speed, a few GHz, and in most cases processes several things at the same time too, but the data has to get from RAM into the CPU cache over the front-side bus, which is very fast but nowhere near as fast as the CPU's clock or processing ability. Keeping things in the cache, or getting them into the cache before they are required, is therefore the key to fast processing. With large images the cache is far too small to hold the image, so the whole image has to pass through the cache: from RAM, over the bus, into the CPU, and back to RAM again. Because of this, the smaller the amount of data transferred, the faster the processing, so in these kinds of cases smaller memory = faster.

I agree with Mid that float is better than 8-bit, so in effect each pixel would be RGBA x float in size, which is 16 bytes per pixel. With a 10Kx10K image that's now 1.6 GB per layer, and if that has to be paged to the HDD then you're into crunch time again. I certainly agree, though, that it makes a big difference with height maps, where 16-bit greyscale is a must.
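The arithmetic behind that 1.6 GB figure, spelled out:

```python
width = height = 10_000     # 10K x 10K image
channels = 4                # R, G, B, A
bytes_per_channel = 4       # 32-bit float per channel

layer_bytes = width * height * channels * bytes_per_channel
print(layer_bytes)               # 1,600,000,000 bytes
print(layer_bytes / 10**9, "GB") # 1.6 GB per layer, before any paging
```

And that is per layer: a multi-layer document multiplies it again, which is why paging to disk becomes a real risk.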

A float is 4 bytes and a double is 8 bytes. Film studios go for float/double when processing. There is also a thing called a half float, which is 2 bytes and is a good compromise between the limited range of 1 byte per component and a full float, which uses a lot of memory. Not everyone supports the half-float format, though.
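Those sizes can be checked with Python's `struct` module, whose `'e'` format character is IEEE 754 half precision:

```python
import struct

# Sizes of the three floating-point formats discussed above.
print(struct.calcsize('e'))  # 2 bytes: half float
print(struct.calcsize('f'))  # 4 bytes: float
print(struct.calcsize('d'))  # 8 bytes: double

# The trade-off: a half float's 10-bit mantissa can't hold 0.1 exactly,
# so a round trip through half precision lands slightly off.
half = struct.unpack('e', struct.pack('e', 0.1))[0]
print(half)  # close to, but not exactly, 0.1
```

The error is tiny for a single value, but as discussed above it compounds across repeated operations, which is why compositors still prefer full float internally.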

One more thing to consider, and it's another curve ball: the streaming SIMD extensions, or SSE. You will see CPUs supporting SSE, SSE2, SSE3 and SSE4. These work with multiple very large registers (I can't remember exactly how big, but something like 16 bytes) that can process multiple values in one register: 4 floats, 2 doubles, or whatever. So packing more of the smaller-sized components into one register is better than fewer of a bigger size.

Piling on the complications, most 32-bit apps are compiled without SSE instructions, since some processors have them and others don't. You can ask your CPU whether it has them and code two or more paths, but generally most people other than video-codec writers don't, because it's too much hassle testing all the configurations. With the x64 instruction set (64-bit apps), however, SSE 1 through 3 are built in, so you know you're getting them; in fact you normally can't turn them off for 64-bit apps. So Photoshop for 64-bit will be using these, and that makes it faster as well. I can also add that Intel 64-bit CPUs run 32-bit apps in a compatibility mode, which is slower than native 64-bit, so it's a multiple whammy going 64-bit.

So, to sum up for those getting glassy-eyed and nerded out: 64-bit is good for memory and speed, and the smaller the memory footprint of your image the better, but some extra precision in image manipulation might be necessary. If you do up the precision, it's a good idea to take the time to understand what that actually means and how much precision you need, because you are making a trade-off. More precision is better for accuracy, of course, but it might kill your performance. The film industry wants quality, won't trade it for performance, and will just buy in more compute resource to cope. Weta Digital (who did Avatar / LotR etc.) have a combined compute resource that puts them at about 25th among the most powerful publicly known compute resources in the world. Us lesser mortals don't have that option, so we have to work on getting the balance right.
• 05-09-2010, 10:00 AM
Talroth
@Redrobes: True, there are a huge number of variables that come into play. We probably shouldn't start talking about the effects of multi-core cache misses and the standard thread-based processing model. And we really shouldn't get into what you can do with floats on a graphics card, with its insane hardware optimisation for these things.

In the end, the data fetching needs to be done either way, and it becomes a question of whether the smaller transfers of packed data save more time than they cost in slower in-processor handling.