Very Large Maps, is it a problem?



PhrozenDragon
04-27-2010, 06:24 PM
Hello all. I want to make a world map of a planet similar in size to Earth. I've read some tutorials here (which were very helpful) and I think I have the basics down on how to actually make a map. Fiddling around with a map the size of 4000x2000 however, I found out there weren't enough pixels for me to add any detail to it.

Since it seems 1 mile = 1 px is the standard around here, I'm wondering if I can create a map of something like 20,000x10,000 px (or perhaps even 40,000x20,000, though that is perhaps not necessary) without my computer starting to lag. I tried it briefly, just zooming and drawing a little, and it worked very well. Will that stay true once I add tons of layers and other things to the map as well? At this point it's only for my own reference and understanding and will be in black & white, if that makes any difference.

Very cool forum, though. I never thought there would be a community dedicated to precisely what I was looking for when I started searching for tips on how to make fantasy maps. This was more than I had hoped for :)

Ascension
04-27-2010, 07:15 PM
More layers = slower performance. The main thing is to see just how big you can go. I can do 25,000 x 12,500 on my machine albeit with major slowdown. I routinely do under 10,000 with ease. Just pick a size, try it and see what happens.

a2area
04-27-2010, 08:29 PM
Layers, groups of layers and layer effects make a huge difference. What I did on my recent map of Gotha, which is 15000 x 7500 (about 2 mi/px at the equator), was to keep data elements, or things that I may want to edit later, in separate layered files from the master. For instance, all of my rivers are on several layers depending on size, and I keep them in another file with a flattened terrain map as the background so I can see what I'm doing when I edit. Then I bring the rivers over to the main file and flatten them as a single layer rather than having another group. Later, if I want to change the rivers for some reason, I trash the layer in the main file, go to my rivers file and change it, then drag it over and flatten again. I do the same thing with climate, rainfall, masks, etc.: bring them over when I need them, use them for that purpose and then delete them from the main file. Better to take up hard disk space with separate, manageable files (preferably on a peripheral drive) than to clog up your RAM with a larger-than-necessary document. It cuts down on time for screen refresh and applying effects.

Something else I have picked up is having two files: one labeled filename_backup which I'll save out to occasionally. With work that big, saving can take a while, and you never know when something might glitch and freeze up. At least you'll have an extra file rather than one corrupt file!

Kodiak
04-27-2010, 11:03 PM
Another thing you may want to do is build all of your continents separately, so you are only generally working with maps 1/4 to 1/8 of your desired world size. When you have finished all of your continents, make your 40000x20000 blank canvas and add your continents to it. This way you only have to deal with slowdown in the final stages of your map, as opposed to getting decreased performance the longer you work on it.

Hai-Etlik
04-28-2010, 01:02 AM
I'd suggest you work in a vector format, possibly even a geospatial vector format and use something like QuantumGIS.

Then you can make maps for any particular area in an appropriate projection and you can pretty them up in a raster graphics editor, or non-spatial vector editor.

I've been thinking of doing just this myself when I get some time.

Yandor
04-28-2010, 01:20 AM
I find this discussion very interesting, because you guys are talking about all these huge image sizes, but what dpi are you running them at? I ran a 3000x15000 at 300dpi (mind you, I don't have the greatest computer) and it was running at a decent rate, faster than some large files I've run. But I'm just wondering, because if you're doing a 12,000x10,000 at 72 dpi, that's a lot different than 300 dpi =D

I've had to break maps up into smaller, manageable files at times, when the map had way too many layers for the size I was running it at, so that does help out a ton. Excellent suggestions from everyone so far!

Hai-Etlik
04-28-2010, 01:31 AM
The resolution is nothing more than a bit of metadata that suggests how large the image should be displayed or at what size it was scanned/sampled. A 20k x 40k raster is a 20k x 40k raster regardless of whether you have a little tag that says 72dpi, 200dpi, or 20m/px.

a2area
04-28-2010, 01:37 AM
Yes, Hai-Etlik is right: 15000 pixels is 15000 pixels, period. Displaying more of them in an inch shrinks the apparent (visual) size but not the document's pixel content, and the choice is dictated by what the image is going to be used for.

Rough stats: photo/art magazine 300-600ppi, glossy mag 300ppi, newspaper 150ppi, web 72ppi.

So, for example, you could use the same image you used in a 4" x 4" glossy magazine ad to fill an 8" x 8" space in a newspaper.

Another example: a 2" x 2" document at 300ppi (making it 600px x 600px) is the same as a 4" x 4" document at 150ppi (still 600px x 600px).
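
To put quick numbers on that, here is a small Python sketch (purely illustrative; the function and values are just examples). The pixel dimensions never change, only the printed size does:

def print_size_inches(width_px, height_px, ppi):
    # The pixel dimensions are fixed; ppi only changes the physical size
    # those same pixels map to on paper.
    return width_px / ppi, height_px / ppi

image = (600, 600)  # 600 x 600 pixels, regardless of the ppi tag

for ppi in (300, 150, 72):
    w, h = print_size_inches(*image, ppi)
    print(f"{image[0]}x{image[1]} px at {ppi} ppi prints at {w:.1f} x {h:.1f} inches")

# 600x600 px at 300 ppi prints at 2.0 x 2.0 inches
# 600x600 px at 150 ppi prints at 4.0 x 4.0 inches
# 600x600 px at 72 ppi prints at 8.3 x 8.3 inches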

Aenigma
04-28-2010, 02:27 AM
I wonder what size of maps I could make if I hooked up my computer to the local nuclear power plant :P

Gamerprinter
04-28-2010, 02:30 AM
There's a stickied discussion in the General Forums on the difference between DPI and PPI; it's worth a look based on some of the discussion here. But to answer your question, it really depends on what software you are using. It sounds like Photoshop or GIMP, both of which get problematically slow with lots of layers and a large pixel area. That's why I use a vector app with robust image handling capability, Xara Xtreme, and why I create most of my maps with the intention of printing, as I run Gamer Printshop and specialize in printing large format maps. If you're using an image editor with lots of layers, performance will go down; it comes down to how the application manages memory.

Good luck with your issue, and welcome to the Guild.

GP

Redrobes
04-28-2010, 08:04 AM
Bigger maps do not necessarily mean a slowdown; it's just generally true because of the way the apps are programmed. I have done raster maps up to 40K using about 30 layers, but you generally can't do it with the usual apps and methods. The main thing I would suggest is to tile your map and deal with one tile at a time, much like doing one continent as Kodiak says.

I would suggest first looking at what resolution you need. I expect that you need a certain resolution over most of the map, higher in specific areas, and probably much, much higher in cities etc., a resolution which is completely unreasonable to map the whole world at. I.e. although you may need quite a high resolution for the world map, it's probably not 40K square. I don't think there are any standard pixels-per-mile type resolutions, and in any case some apps will cope with a variety of resolutions and scales.

waldronate
04-28-2010, 10:30 AM
There's also a matter of personal expectations. Some folks are perfectly fine with operations taking 20 seconds, others find operations that take longer than 2 seconds to be intolerable. I recommend working at a medium resolution (4kx2k or so) to get the world and then do more local areas as has been suggested. This technique will balance the amount of data the machine has to handle against the quality of the final output. And, the bigger the machine, the larger the map before you hit the slowdown where it has to swap to disk continually.

The biggest problem with this technique is that certain whole-image operations are extremely sensitive to the size of a pixel in order to get plausible results. The most common example is fluvial erosion. In the case of erosion, a river is almost never even a pixel wide, but most flow analysis and erosion algorithms require and/or generate rivers more than one pixel wide. The results are beautiful, but they have a certain signature to them, just as most simple fractally-generated terrains do.

Yandor
04-28-2010, 11:19 AM
Hmm, weird, my second post didn't make it. I understand the whole dpi thing when printing and all that; I was just wondering what people's personal preference on dpi is when using such a large image size. Because I wouldn't assume you would have 300 dpi on a 10,000x10,000 image, but on the reverse side, if you are using that big an image, do you lower the dpi a lot (say into the 40s or 50s instead of the standard 72)? That's really what I was getting at.

Gamerprinter
04-28-2010, 11:35 AM
It depends on your final needs: how will the map be used? If intended for print, like most of my maps, I say a minimum of 200 dpi. If your intended use is a Virtual Terrain app, 30 or 50 ppi is plausible, though I know many VT users who prefer 100 ppi so you can zoom in and still see detail. If your map will only appear as a web graphic for, say, PbP, 72 or 96 ppi is optimum, as it is only intended to fit on a computer monitor and that's likely to be 1024 x 768 or some other screen size.

Whatever your use for the map is, that's the pixel dimension it should be designed for. It varies.

GP

a2area
04-28-2010, 11:49 AM
Hmm, weird, my second post didn't make it. I understand the whole dpi thing when printing and all that; I was just wondering what people's personal preference on dpi is when using such a large image size. Because I wouldn't assume you would have 300 dpi on a 10,000 x 10,000 image, but on the reverse side, if you are using that big an image, do you lower the dpi a lot (say into the 40s or 50s instead of the standard 72)? That's really what I was getting at.

As I get into larger and larger maps I have to tile them, especially in Wilbur, probably because I'm running Vista (ugh) on a Mac. It really doesn't matter what the ppi is until you output it, so I generally work at 72 ppi for monitor/web resolution. The ppi doesn't make any difference in document size as long as the pixel dimensions stay the same; the only real working difference is that when you zoom to 100%, the larger the ppi the smaller it will appear on the screen.

PhrozenDragon
04-28-2010, 03:37 PM
Thanks for the replies, way more than I expected.



Something else I have picked up is having two files: one labeled filename_backup which I'll save out to occasionally. With work that big, saving can take a while, and you never know when something might glitch and freeze up. At least you'll have an extra file rather than one corrupt file!
I never thought of it, but I'll remember to do that.

As for keeping multiple files, I don't know that I have the patience to carry that out just yet. I suppose I'll have to resort to it if things start to slow down, though. I could of course cut away a lot of ocean to conserve space; no need to go around the entire globe if there's nothing but water at the edges.


Another thing you may want to do is build all of your continents separately, so you are only generally working with maps 1/4 to 1/8 of your desired world size. When you have finished all of your continents, make your 40000x20000 blank canvas and add your continents to it. This way you only have to deal with slowdown in the final stages of your map, as opposed to getting decreased performance the longer you work on it.
I thought of that, but the world is mostly one super-continent, so I figured it would be hard to make sure mountain ranges, rivers and forests lined up in the final version if I did it this way.


But to answer your question, it really depends on what software you are using. It sounds like Photoshop or GIMP, both of which get problematically slow with lots of layers and a large pixel area. That's why I use a vector app with robust image handling capability, Xara Xtreme, and why I create most of my maps with the intention of printing, as I run Gamer Printshop and specialize in printing large format maps. If you're using an image editor with lots of layers, performance will go down; it comes down to how the application manages memory.
I have 4GB of RAM, running Vista. Also, does it matter that it's going to be just black&white?

I guess I'll just have to test and find out. And yes, I'm using Photoshop.



I would suggest first looking at what resolution you need. I expect that you need a certain resolution over most of the map, higher in specific areas, and probably much, much higher in cities etc., a resolution which is completely unreasonable to map the whole world at. I.e. although you may need quite a high resolution for the world map, it's probably not 40K square. I don't think there are any standard pixels-per-mile type resolutions, and in any case some apps will cope with a variety of resolutions and scales.
Well of course I can't add city maps to it, I didn't mean that level of detail :P

I do hope it will help me get a sense of how big things are and how far apart they lie, if I can mark down more than just countries and capitals: also towns and smaller lakes and forests (which I found hard to represent on my smaller map).

Midgardsormr
04-29-2010, 12:05 PM
4GB on Vista might be straining for an image that size. If you were in XP, you'd have an easier time of it. Windows 2000 would be better still. Seems backward, doesn't it? If you start to have problems, try turning off Aero and any other unnecessary bells and whistles. Vista seems to have pretty good memory management in my experience, but it's still rather bloated.

Reducing the color depth, though, will help you a lot. Are you really talking black-and-white (1-bit) or grayscale (8-bit)? A 1-bit image is going to be light enough that you'll probably be fine even at the huge size you're talking about. An 8-bit grayscale is still better than full color, as it has only 1/2 the amount of information that an RGB image has (assuming that you're working non-destructively; 1/3 if you're not using masks). Just make sure you're actually in Grayscale mode, though, or you won't get any of the benefits: Image > Mode > Grayscale.
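
To put rough numbers on that, here is a quick Python sketch (uncompressed, single layer, no masks; a back-of-the-envelope illustration, not what Photoshop actually allocates):

def layer_bytes(width_px, height_px, channels, bits_per_channel):
    # Raw, uncompressed size of one layer in bytes.
    return width_px * height_px * channels * bits_per_channel // 8

w, h = 20000, 10000  # the map size discussed in this thread

print("1-bit black & white:", round(layer_bytes(w, h, 1, 1) / 2**20), "MiB")  # ~24 MiB
print("8-bit grayscale:    ", round(layer_bytes(w, h, 1, 8) / 2**20), "MiB")  # ~191 MiB
print("8-bit RGB:          ", round(layer_bytes(w, h, 3, 8) / 2**20), "MiB")  # ~572 MiB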

PhrozenDragon
04-29-2010, 04:39 PM
Reducing the color depth, though, will help you a lot. Are you really talking black-and-white (1-bit) or grayscale (8-bit)? A 1-bit image is going to be light enough that you'll probably be fine even at the huge size you're talking about. An 8-bit grayscale is still better than full color, as it has only 1/2 the amount of information that an RGB image has (assuming that you're working non-destructively; 1/3 if you're not using masks). Just make sure you're actually in Grayscale mode, though, or you won't get any of the benefits: Image > Mode > Grayscale.

Yes, I meant 8-bit. Sounds worth a shot then.

The Stoat
05-06-2010, 01:13 PM
I am designing a campaign world that I intend to be very large. The two things I am finding I need to do are tiling and working at different scales. The large-scale maps give overviews and keep the tiles in alignment. I am using GIMP for my mapping. I am learning as I go how to transfer the different elements between maps so I can change the scale of elements such as roads, mountains and rivers.
I imagine it as taking a page from the Google Earth playbook, letting me scale up to continents and down to a village while both hold consistency.

Natai
05-07-2010, 11:28 AM
Here's an excerpt from a post I made in this thread: Question on huge size images (http://www.cartographersguild.com/showthread.php?10203-PS-Question-on-huge-size-images)



The image is 20122x11634 @ 450 px/in. The PSB file is about 3.7GB saved, and PS consumes around 7.75GB in memory while I'm working with it. So tilt's point is a good one - go with 2-3 times the amount of RAM relative to the saved file size.

So, bottom line: if you're working with files larger than about 1.5GB, you will probably benefit from upgrading to 64-bit. Just be sure to keep in mind what Redrobes posted: your entire system has to be 64-bit, the CPU, OS and the application.


That thread might be useful, as it contains some good information about RAM, memory pointers, etc. And just about all the posts are referencing maps at the scale you were talking about, 20k x 10k and up.

jwbjerk
05-07-2010, 03:04 PM
...the only real working difference is that when you zoom to 100%, the larger the ppi the smaller it will appear on the screen.

I don't know about GIMP, but in Photoshop zooming to 100% means that every pixel on your screen represents a pixel in your document, i.e. there's a one-to-one correspondence. DPI or PPI is irrelevant to how "100%" looks on screen.


Some other tricks:

* I like to use "Solid Color" adjustment layers when they make sense in my maps. They create only one channel's worth of information instead of four.
The "foundation" layer that contains the shape of my coastline is a solid color layer, and I clip-mask everything I want to conform to the coastline to it. To create a solid, semitransparent layer of color, like to delineate a temperature zone, political boundary, or a biome, I'll generally use a solid color layer. I can just paint on the mask to change the size; I don't have to worry about selecting that precise color again, or accidentally painting on it.


* Gradient maps are a great way to make topographic effects that adjust themselves as you edit the map. They are also gentle on the file size. See step #11 of my mini tutorial (http://www.cartographersguild.com/showthread.php?9919-WIP-This-Orb-world-building-and-mapping-project&p=110698&viewfull=1#post110698), and the rough sketch after this list.


* In Photoshop (and I think GIMP) you can set the bit depth of each channel. Going from 8 bits/channel to 16 or 32 greatly increases the file size and processing time. For map-making you will not regret leaving it at 8 bits/channel.


* You can never have too much RAM or scratch disk space for Photoshop. In Photoshop's preferences you can allow additional hard drives to serve as scratch disks (the precise way you do this varies with the version). Adding fast drives, especially ones with lots of GB free, to the scratch disk list can greatly boost Photoshop's performance on those time-consuming commands.
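
And here is the rough sketch of the gradient-map idea, in Python with numpy (only an illustration of the concept, not how Photoshop implements it; the color ramp values are made up):

import numpy as np

def gradient_map(height, stops):
    # height: 2D array of values in [0, 1]; stops: list of (position, (r, g, b)).
    # Each height value is looked up in the color ramp, so recoloring the map
    # only ever means editing the ramp or the heightmap, never both.
    positions = np.array([p for p, _ in stops])
    colors = np.array([c for _, c in stops], dtype=float)
    out = np.empty(height.shape + (3,))
    for ch in range(3):
        out[..., ch] = np.interp(height, positions, colors[:, ch])
    return out.astype(np.uint8)

# A made-up sea-to-summit ramp: deep blue, shallow blue, green, brown, white.
ramp = [(0.0, (10, 30, 90)), (0.45, (60, 120, 180)),
        (0.5, (70, 140, 60)), (0.8, (120, 90, 60)), (1.0, (245, 245, 245))]

heightmap = np.random.rand(256, 256)      # stand-in for a real heightmap
colored = gradient_map(heightmap, ramp)   # re-renders if either the map or the ramp changes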

su_liam
05-07-2010, 03:51 PM
* In Photoshop (and I think GIMP) you can set the bit depth of each channel. Going from 8 bits/channel to 16 or 32 greatly increases the file size and processing time. For map-making you will not regret leaving it at 8 bits/channel.


Can you really? Do tell. If I could work out how to do this, it would be enormously helpful. I'd give a lot to be able to edit the elevation data in 16- or, better, 32-bit grayscale and apply gradient maps and lighting effects (I wish that was a layer effect) in 8-bit RGB.

EDIT: That, by the way, wasn't intended to be as smarmy, smug and generally asinine as it came out. I just really, really want to know how.

jwbjerk
05-07-2010, 05:26 PM
Can you really? Do tell. If I could work out how to do this, it would be enormously helpful. I'd give a lot to be able to edit the elevation data in 16- or, better, 32-bit grayscale and apply gradient maps and lighting effects (I wish that was a layer effect) in 8-bit RGB.
OK.

[attachment 24541]

To be clear, you set the bit depth for the document as a whole. 32-bit was added recently (CS3 or CS4), but the 16-bit option has been around for at least several versions before that.

a2area
05-07-2010, 06:53 PM
I don't know about GIMP, but in Photoshop zooming to 100% means that every pixel on your screen represents a pixel in your document, i.e. there's a one-to-one correspondence. DPI or PPI is irrelevant to how "100%" looks on screen.



Actually, you are right, which means that PPI is totally irrelevant to document fidelity unless your image is resampled as well. Still, the original point remains: if a document is 100 pixels by 100 pixels, then it remains so whether it's set at 1ppi, 10ppi or 100ppi, etc.

su_liam
05-07-2010, 07:20 PM
Okay, I gotcha. I usually do my HF editing and color texturing in separate documents anyway, when I don't just use Wilbur. Interestingly, 32-bit was really crashy on the Windows machines at school, but seems more reliable on my Mac. Adobe and Apple get along so well inside the computer that it's a little odd how hammer-and-tongs they've gotten out here in the real world.

Antheon
05-07-2010, 07:33 PM
Of course the dimensions will stay the same. The ppi isn't pixel-related; it just says how many pixels fit within an inch of length. So if I want a document with 100 pixels in both directions, it will still be 100 pixels regardless of the ppi selected, and the file size stays the same too. Working on the screen, it doesn't matter what ppi you select, as you are working with pixels and not inches/centimeters/millimeters/whatever. But if you do it the other way around and configure the "real" size of the image, the pixel dimensions will change, of course, because the number of pixels used for the document then depends on the "real" dimensions.

Open a new document in any graphics program you like, be it GIMP or PS or whatever, and create a 100x100 at 100ppi. Then do the same at two pixels per inch. Both documents seem to have the same size, you say? Of course they do! You entered the same pixel length. It doesn't matter whether 100px or 2px fit within one inch; 100px are still 100px. The difference is that the first one fits perfectly inside that inch and the other one is 50 inches long. If you change the units on your ruler to inches or something else, you'll notice the difference between the two examples. : )


Antheon
hoping he doesn't talk nonsense ...

a2area
05-07-2010, 08:09 PM
Hey, stop repeating me Antheon! (0:
Now which is heavier.. a pound of feathers.. or a pound of lead? he he..


Actually, what I'm still going on about (like a senile old man) originates from an earlier post where someone said

if you're doing a 12,000x10,000 at 72 dpi, that's a lot different than 300 dpi
... when in reality there is no difference in your working environment.


I have no idea what bit depth is for, however, although I have often wondered. I suppose I should look into it at some point.

jwbjerk
05-07-2010, 08:28 PM
Hey, stop repeating me Antheon! (0:
I have no idea what bit depth is for, however, although I have often wondered. I suppose I should look into it at some point.

Bit depth controls how many different colors a channel can have. For instance, an 8-bit grayscale document can have 256 different "colors": black, white, and 254 greys in between. 256 is a number that requires 8 bits to store. With 256 levels each of red, green and blue, you can mix ~16 million distinct colors, whether or not your monitor or eye can distinguish them.

A number that takes up 16 binary bits is much larger: you get 65,536 levels of grey in a 16-bit grayscale document, and some trillions of possible colors in RGB.
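
The arithmetic behind those figures, as a tiny Python sketch (illustrative only):

for bits in (1, 8, 16):
    levels = 2 ** bits                     # levels per channel
    print(f"{bits:>2}-bit grayscale: {levels:,} levels")
    print(f"{bits:>2}-bit-per-channel RGB: {levels ** 3:,} colors")

#  1-bit grayscale: 2 levels
#  1-bit-per-channel RGB: 8 colors
#  8-bit grayscale: 256 levels
#  8-bit-per-channel RGB: 16,777,216 colors
# 16-bit grayscale: 65,536 levels
# 16-bit-per-channel RGB: 281,474,976,710,656 colors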

Antheon
05-07-2010, 08:31 PM
Yeah, I know that I was repeating you, but it seemed there was a misunderstanding about ppi in the workflow. : )
As for bit depth, it defines the colour palette you can use. 1-bit simply means 2^1 colours, therefore we have black and white. With two bits we can apply four colours: black, white and two greyish ones. With eight bits we have 2^8 = 256 colours; I think we called that VGA back then. With 32-bit at our disposal we have 4.2 billion distinct colours, but often the program only uses 24 bits for colour and 8 bits for alpha etc. At 48 bits we already have 281.5 trillion colours, and there are graphics cards able to go even beyond that and support 64-bit colour!


Antheon
ready for Mr Sandman

EDIT:
Argh, I was too slow. *laughs*

a2area
05-07-2010, 09:08 PM
So that's what being "ninja'd" looks like (0:

I can understand millions of colors, as that makes a visible difference on screen, but why would anyone want to go beyond 64-bit? If it's not visibly different to the eye, it seems the only use would be for the sake of holding a wider array of data???

PS @Antheon and jwbjerk: FYI, I wasn't being argumentative earlier, just in case it came across that way (I don't think I did?), just trying to clarify my earlier point about pixel size.

Talroth
05-07-2010, 10:49 PM
So that's what being "ninja'd" looks like (0:

I can understand millions of colors, as that makes a visible difference on screen, but why would anyone want to go beyond 64-bit? If it's not visibly different to the eye, it seems the only use would be for the sake of holding a wider array of data???

PS @Antheon and jwbjerk: FYI, I wasn't being argumentative earlier, just in case it came across that way (I don't think I did?), just trying to clarify my earlier point about pixel size.

The expanded range for storing the colour data isn't for displaying the colours themselves; rather, it is for doing maths on them.

With 32-bit colour methods you usually only have 8 bits per channel, and 4 channels. If you run two colours through a function, each colour component doesn't really have a large scale to work on. If you keep running colours through different functions, you risk diluting the true value of the colour.


Think of it as doing math with just integers: 1, 2, 3, etc. When you say 2/3 = 1 you're close, but you're not right. If you keep doing this, you get farther and farther from the real answer, even if you round it back to an integer at the end.

Going to something like 256-bit colour is like giving yourself a few decimal places to work with. 2/3 isn't 1, it is 0.67, which, when run through the next function, is a lot closer to the real answer than 1 would be.


At the end of the day there is no scientific reason to display more than 32-bit for most humans. Even 24-bit RGB is 'close enough', from what I remember.
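
A toy Python sketch of that rounding point (nothing to do with any particular app's internals; the numbers are made up). Repeatedly scaling an 8-bit integer value down and back up loses information, while the same edits in float round-trip cleanly:

value_int = 200            # an 8-bit channel value, 0-255
value_float = 200.0

for _ in range(20):
    value_int = value_int * 2 // 3        # integer math truncates on every pass
    value_float = value_float * 2 / 3

for _ in range(20):
    value_int = value_int * 3 // 2        # try to undo the edit
    value_float = value_float * 3 / 2

print(value_int)              # has collapsed all the way to 0
print(round(value_float))     # 200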

RobA
05-07-2010, 11:28 PM
Regarding DPI/PPI and GIMP: the default mode is "dot for dot", so a 300x300px image at 300dpi will take up 300x300 screen pixels at 100%. If, however, you turn off dot-for-dot mode, that same image is now drawn on the screen as a 1" square, i.e. it is scaled to convert the image dpi to my display dpi and make things show up at "real world size".

-Rob A>

Midgardsormr
05-08-2010, 11:43 AM
And in Photoshop, viewing Actual Pixels (or double-clicking the Zoom tool) is equivalent to Gimp's "dot for dot." Print Size will show you an approximation of the real-world size if you were to print the document. Note that it is only an approximation that can be made more accurate by changing your Screen Resolution setting in Edit > Preferences > Units & Rulers. The default of 72 is likely to be incorrect for the majority of screens. Mine, for instance, is somewhere around 101 ppi.

This is true in PS CS3 and CS4. I cannot verify the controls in other versions.

Also, to expand a bit on what Talroth is saying:
Although the eye won't really notice the difference in colors at higher than 8 bits per channel (24-bit RGB), as you start to process an image, that lack of precision will start to become visible in banding artifacts. If you start to see banding in your gradients, that's likely the cause. In terms of heightfield processes, low bit-depth will result in noticeable terracing, where the land steps up abruptly instead of having smooth slopes.

I've been going to school to learn digital compositing for film, and the first step when bringing an image into the compositor is to convert it to 32-bit per channel floating point linear color (float, for short), even if the source is only 8 bits per channel. At first, it's like carrying around a drop of water in a 10-gallon bucket, but as the software processes the image, its information grows to fill the space. At the end of the process, the new image is usually output back to 8 bits per channel, but everywhere in between I'm working in float.
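
A small numpy sketch of the banding/terracing point (purely illustrative; the operation and values are arbitrary). Darkening a smooth gradient heavily and then brightening it back destroys most of the levels in 8-bit, but not in float:

import numpy as np

gradient = np.linspace(0, 255, 1024)                    # a smooth ramp / gentle slope

dark_8bit = np.round(gradient * 0.1).astype(np.uint8)   # heavy darken, quantized to 8-bit
restored_8bit = np.clip(dark_8bit.astype(int) * 10, 0, 255)

dark_float = gradient * 0.1                              # the same edits kept in float
restored_float = np.clip(dark_float * 10, 0, 255)

print(np.unique(restored_8bit).size)              # only ~27 distinct values: visible bands / terraces
print(np.unique(np.round(restored_float)).size)   # ~256 distinct values: still smooth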

Hai-Etlik
05-08-2010, 01:06 PM
Actually, you are right, which means that PPI is totally irrelevant to document fidelity unless your image is resampled as well. Still, the original point remains: if a document is 100 pixels by 100 pixels, then it remains so whether it's set at 1ppi, 10ppi or 100ppi, etc.

It's an option to choose which behaviour you want: View -> Dot for Dot

Talroth
05-08-2010, 05:42 PM
At the end of the process, the new image is usually output back to 8 bits per channel, but everywhere in between I'm working in float.

And to expand even more.

Another good reason for expanding to a 'full'-sized data type, such as a 32- or 64-bit float or int, is that, provided you have the memory to handle it, processing 'full size' data is faster on most hardware than dealing with a 'packed' data type like an 8-bit byte.

Most systems deal with 32- or 64-bit memory words and have been streamlined to deal with those. Accessing 'sub-data' types usually means reading part of a full-sized memory word, which is an extra step or more compared to dealing with the whole data block.

Redrobes
05-09-2010, 05:23 AM
@Mid: Good info there and I agree with it all.

@Talroth: Technically you are correct, but only at the purest level on the CPU, within the cache, and that would be true for some processes. In many cases, and images are exactly these cases because you have a lot of data, the main processing slowdown is fetching the data from memory to the CPU to be processed. The CPU runs at some ludicrous speed like a few GHz and in most cases processes several things at the same time too, but you have to get the data from RAM into the CPU cache, which goes over the front side bus; this is very fast but nowhere near as fast as the CPU's clock or processing ability. Therefore, keeping things in the cache, or getting things into the cache before they are required, is the key to fast processing. With large images the cache is way too small to hold the image, so the whole image has to pass through the cache. In effect the whole image has to go from RAM over the bus into the CPU and back again to RAM. Because of this, the smaller the amount of data transferred, the faster the processing, so in these types of cases smaller memory = faster.

I agree with Mid that float is better than 8-bit, so in effect each pixel would be RGBA x float in size, which is 16 bytes per pixel. With a 10K x 10K image that's now 1.6GB per layer, and if that has to be paged to the HDD then you're into crunch time again. I certainly agree, though, that it makes a big difference with height maps, where 16-bit grayscale is a must.

A float is 4 bytes and a double is 8 bytes. Film studios go for float/double when processing. There is also a thing called a half float, which is 2 bytes and is a good compromise: it gives a lot more range than 1 byte per component without the memory cost of full float. Not everyone supports the half float format, though.
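
As a quick Python sketch of those per-layer numbers (uncompressed RGBA, one layer; illustrative only, not how any particular editor stores its layers):

BYTES_PER_COMPONENT = {"8-bit": 1, "half float": 2, "float": 4, "double": 8}

def layer_gb(width_px, height_px, channels, component):
    # Raw, uncompressed bytes for one layer, expressed in GB.
    return width_px * height_px * channels * BYTES_PER_COMPONENT[component] / 1e9

for component in ("8-bit", "half float", "float", "double"):
    size = layer_gb(10_000, 10_000, 4, component)
    print(f"10K x 10K RGBA, {component:>10}: {size:.1f} GB per layer")

# 10K x 10K RGBA,      8-bit: 0.4 GB per layer
# 10K x 10K RGBA, half float: 0.8 GB per layer
# 10K x 10K RGBA,      float: 1.6 GB per layer
# 10K x 10K RGBA,     double: 3.2 GB per layer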

One more thing to consider, which is another curve ball, is the streaming SIMD extensions, or SSE. You will see CPUs supporting SSE, SSE2, SSE3 and SSE4. These work with multiple very large registers (I can't remember exactly how big, but something like 16 bytes) and they can process multiple things in one register, like 4 floats or 2 doubles or whatever. So packing more smaller-sized components into one register is better than fewer of a bigger size. Piling on the complications, most 32-bit apps are compiled without the use of SSE instructions, since some processors have them and others don't. OK, you can ask your CPU whether you have them and code two or more paths, but generally most people other than video codec writers don't do it because it's too much hassle testing all the configurations. However, with the x64 instruction set (64-bit apps) the core SSE instructions are built in, so you know you're getting them; in fact you normally can't turn them off for 64-bit apps. So PS for 64-bit will be using these, and that makes it faster as well. I can also add that all Intel 64-bit CPUs run 32-bit apps in a compatibility mode which is slower than native 64-bit, so it's a multiple whammy going 64-bit.

So, to sum up for those getting glassy-eyed and nerded out: 64-bit is good for memory and speed, and the smaller the memory footprint of your image the better, but having some extra precision in image manipulation might be necessary. If you do up the precision, it's a good idea to take the time to know what this actually means and have an idea of how much precision you need, because you are making a trade-off. More precision is better for precision, of course, but it might kill your performance. The film industry wants quality and won't trade that for performance; they would just buy in more compute resources to cope. Weta Digital (who did Avatar / LotR etc.) have a combined compute resource which puts them at about 25th among the most powerful publicly known compute resources in the world. We lesser mortals don't have that option, so we have to work on getting the right balance.

Talroth
05-09-2010, 10:00 AM
@Redrobes: True, there are a huge number of variables that come into play. Likely we shouldn't start talking about the effects of multi-core cache misses and dealing with the standard thread-based processing model. And we really shouldn't get into what you can do with floats on a graphics card, with insane hardware optimization for these things.

In the end, the data fetching needs to be done either way, and it becomes a question of whether the smaller transfers of packed data save more time than the faster in-processor handling of full-sized data would.