The processing pipeline currently calculates a whole image in memory and then writes that image to disk. FT is 32-bit only, meaning that there is a hard limit on image size (it seems to be around 64 MPixel) imposed by the OS and available memory. Implementing larger file sizes would require restructuring the pipeline to include overlapped render and write operations, as well as a way to let the OS render onto these large images. It shouldn't be an insurmountable problem, but I wouldn't expect to see it implemented any time soon.
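For illustration, here's a minimal sketch of the strip-based shape such a restructuring might take: render the image in horizontal strips and stream each strip to disk as it completes, so the whole image never has to fit in memory at once. This is not FT's actual pipeline; the function names and the placeholder shading are my assumptions.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch only: render in strips and write each one before rendering the
// next, so peak memory is one strip rather than the whole image.
static void renderStrip(std::vector<std::uint8_t>& strip,
                        std::uint64_t y0, std::uint64_t width, std::uint64_t rows) {
    for (std::uint64_t y = 0; y < rows; ++y)
        for (std::uint64_t x = 0; x < width; ++x)
            strip[y * width + x] = static_cast<std::uint8_t>((x + y0 + y) & 0xFF); // placeholder shading
}

int main() {
    const std::uint64_t width = 16384, height = 16384, stripRows = 256; // ~268 MPixel total
    std::vector<std::uint8_t> strip(width * stripRows);                 // only one strip in memory
    std::FILE* out = std::fopen("large_render.raw", "wb");
    if (!out) return 1;
    for (std::uint64_t y = 0; y < height; y += stripRows) {
        renderStrip(strip, y, width, stripRows);
        std::fwrite(strip.data(), 1, strip.size(), out); // a real version would overlap this write with the next render
    }
    std::fclose(out);
    return 0;
}
```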
If I can find good documentation on the proprietary PSD/PSB file formats, then I might be able to include support for them (the ideal discovery would be a library that writes such files, of course).
Yes.
Interesting feature request. How would you envision the user interface for this working?
I'll throw this on the list of requests, but it's less likely to get attention soon than some of the others.
FT currently implements the "Add View" feature on the View menu, which saves the current map projection and center to a list of named views. These views can be restored by selecting them from the list.
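The data a named view needs to capture is small; a sketch of the idea (field and function names are my assumptions, not FT internals):

```cpp
#include <map>
#include <string>

// Sketch: a named view is just the projection plus the map center,
// keyed by name so it can be restored later. Field names assumed.
struct NamedView {
    std::string projection;  // e.g. "Mercator"
    double centerLon = 0.0, centerLat = 0.0;
};

std::map<std::string, NamedView> savedViews;

void addView(const std::string& name, const NamedView& view) {
    savedViews[name] = view;          // "Add View"
}

const NamedView* restoreView(const std::string& name) {
    auto it = savedViews.find(name);  // selecting from the list
    return it == savedViews.end() ? nullptr : &it->second;
}
```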
This feature is already on the request list but hasn't bubbled its way to the top.
This feature is already on the request list, but is not on the immediate work list. You may have noticed that each river segment has an importance attribute (OK, you can't see it, but it's there) that affects the visibility of rivers based on zoom level. Only major river sections show up at the whole-world zoom, with more detailed river segments appearing as the zoom increases.
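A sketch of how that kind of importance-based culling typically works; the threshold policy below is a placeholder of my own, not FT's actual rule:

```cpp
// Sketch: each river segment carries an importance rank (the attribute
// mentioned above), and a segment draws only when its rank clears a
// zoom-dependent threshold. The specific mapping here is assumed.
struct RiverSegment { int importance; /* geometry omitted */ };

bool visibleAtZoom(const RiverSegment& seg, int zoomLevel) {
    int threshold = 3 - zoomLevel;    // whole-world zoom (0) shows only rank >= 3
    if (threshold < 0) threshold = 0; // fully zoomed in: draw every segment
    return seg.importance >= threshold;
}
```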
There are many, many places in FT that have the unfortunate characteristic of using a non-specific integer size in file I/O. Changing to a 64-bit version makes these file I/O operations generate improper results. I don't have a comprehensive test suite for FT, so I'm likely to miss some corner cases. People get pretty vocal when that happens, meaning I'd prefer to avoid it if possible.
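The hazard, in sketch form: types like `long` keep one size on some platforms and builds and a different size on others, so any record written with them can silently change layout when the build target changes. Fixed-width types pin the layout down. Illustrative names only, not FT code:

```cpp
#include <cstdint>
#include <cstdio>

// Risky: sizeof(long) is 4 bytes on some platforms/builds and 8 on
// others, so the on-disk record size depends on the compiler target.
void writeHeaderRisky(std::FILE* f, long width, long height) {
    std::fwrite(&width,  sizeof(width),  1, f);
    std::fwrite(&height, sizeof(height), 1, f);
}

// Stable: std::int32_t is always 4 bytes, so old files keep reading
// correctly no matter which build wrote them.
void writeHeaderStable(std::FILE* f, std::int32_t width, std::int32_t height) {
    std::fwrite(&width,  sizeof(width),  1, f);
    std::fwrite(&height, sizeof(height), 1, f);
}
```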
The CC2 output from FT is a true vector output. Did you have a different format in mind?
It's a good suggestion, but unlikely to be implemented in the short term.
Could you describe how you would like to see such a feature operate, especially with regard to what parts get nudged and how a user would indicate those parts?
I don't understand this request.
Could you describe which projections are not operating properly, especially how resolution dependence affects the projection?
A better brush engine is on the feature list. The biggest problem that the brush system in FT currently has is the lack of hardware acceleration. Hardware is finally getting to the point where acceleration of floating-point textures is nearly universal. However, there is a small but vocal minority of people who insist on running the software on machines four or more years old. Maintaining two sets of software for two different paths is expensive and time-consuming, especially for a tiny part-time software team.
I'm not sure which version of FT you're running, so I'm not sure if the current implementation of brush presets in the FT beta meets your desires for part B or not.
I don't have any control over either of these items, unfortunately.
I do agree with the general sentiment, but I'm not sure how the user-specified elements would translate to the internal computation system. My biggest problem is that I can't make breaking changes to the calculation algorithms. I can add new or alternate paths, but the existing saved worlds need to keep working from version to version (there was a breaking change required to fix a bug around version 1.23, and it was quite painful for many folks).
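One common way to add new math without breaking old saves is to keep the original algorithm frozen and dispatch on a version stamp stored in the world file. A sketch under that assumption; the names and placeholder math are mine, not FT internals:

```cpp
#include <cstdint>

double elevationV1(double x, double y) { return 0.5 * x + 0.5 * y; } // frozen: old worlds depend on it
double elevationV2(double x, double y) { return 0.6 * x + 0.4 * y; } // new path, opt-in only

// Old files keep producing identical terrain; only worlds saved with the
// newer version (or explicitly upgraded) take the new path.
double elevation(std::uint32_t worldVersion, double x, double y) {
    return (worldVersion >= 2) ? elevationV2(x, y) : elevationV1(x, y);
}
```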
This is a pretty difficult operation because it requires both histogram and spatial statistical matching. I'll put it on the list, but don't expect to see anything like this in the near future.
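For context, the histogram half is the tractable part; a minimal CDF-matching sketch is below. The spatial-statistics half (matching local texture and structure) is what makes the whole operation hard, and it isn't shown.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: remap source values so their distribution matches a reference,
// by looking up each source quantile in the reference CDF.
std::vector<std::uint8_t> matchHistogram(const std::vector<std::uint8_t>& src,
                                         const std::vector<std::uint8_t>& ref) {
    std::array<double, 256> srcCdf{}, refCdf{};
    for (std::uint8_t v : src) srcCdf[v] += 1.0;
    for (std::uint8_t v : ref) refCdf[v] += 1.0;
    for (int i = 1; i < 256; ++i) { srcCdf[i] += srcCdf[i - 1]; refCdf[i] += refCdf[i - 1]; }
    for (int i = 0; i < 256; ++i) { srcCdf[i] /= src.size(); refCdf[i] /= ref.size(); }

    std::array<std::uint8_t, 256> lut{};
    for (int i = 0, j = 0; i < 256; ++i) {
        while (j < 255 && refCdf[j] < srcCdf[i]) ++j;  // first ref level reaching this quantile
        lut[i] = static_cast<std::uint8_t>(j);
    }
    std::vector<std::uint8_t> out(src.size());
    for (std::size_t k = 0; k < src.size(); ++k) out[k] = lut[src[k]];
    return out;
}
```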
I don't understand this request. Could you provide an example?
Again, I'm not quite sure what you're asking here. Is the idea that you can reproject an external binary file on the fly to allow for a movement of the N/S axis?
Better climate models have been on the wish list since day one. It's a non-trivial problem to solve. I have some code that could work for the problem, but I haven't integrated it into FT at this point. The biggest problem historically was the tradeoff between execution speed and fidelity of the results. I think machines are fast enough these days (plus some better algorithms have appeared over the years) that I might be able to get good-enough results. However, there are lots and lots of features on the list that might be ahead of this one.
The effectiveness of the calculations is usually a function of the resolution of the world editing data. Developing scalable algorithms is a fairly complex task in most situations. The continental drift calculation, for example, would provide "scalability" solely in terms of the time step at which the calculations would run. The idea would be that smaller time steps would provide better fidelity. This assumption isn't quite correct, however, because smaller time steps can allow some otherwise tiny effects to enter into the simulation, potentially changing the results radically compared to the larger time step. To the best of my knowledge, this effect is true for most discrete-time simulations with random or semi-random components. I have had folks become incensed when they get different results by using a "better" time constant, who then change constants further to make things "better", changing their results yet further, and eventually go yell at my boss about what an incompetent I am because I'm clearly not using the "perfect" equations they provided. Or maybe that was just a one-time bad experience.
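A toy example of the effect, assuming nothing about FT's actual drift code: an Euler-integrated system with a small oscillating term and a threshold event. Shrinking the time step doesn't just refine the answer; it changes when the threshold trips, which reroutes everything after it.

```cpp
#include <cmath>
#include <cstdio>

// Toy illustration only: the tiny sine term is aliased at large dt, and
// the threshold event amplifies any difference in when it first trips.
double simulate(double dt) {
    double x = 1.0;
    for (double t = 0.0; t < 10.0; t += dt) {
        double nudge = 0.001 * std::sin(40.0 * t); // otherwise-tiny effect
        x += dt * (0.1 * x + nudge);
        if (x > 2.0) x *= 0.5;                     // threshold event reroutes the trajectory
    }
    return x;
}

int main() {
    std::printf("dt=0.1   -> x=%f\n", simulate(0.1));
    std::printf("dt=0.01  -> x=%f\n", simulate(0.01));
    std::printf("dt=0.001 -> x=%f\n", simulate(0.001));
    return 0;
}
```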
The river flow computations that lead to river results have the option to be based on local rainfall. The lake computations are done without the benefit of evaporation. Lakes and rivers in FT don't interact because rivers are a pure slope-based computation and lakes are a pure basin-based computation. A true fluvial computation including evaporation requires a good global computation model. The number of partial differential equations that need to be solved at the same time is quite large, yielding an extremely slow result. Reducing the resolution of the computations is possible, but things like rivers and lakes can move around quite a bit with varying resolutions, especially with fluvial erosion tossed in.
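To make the distinction concrete, here's what a pure slope-based step looks like (illustrative code, not FT's): flow just follows the steepest downhill neighbor, with no notion of water volume, basins, or evaporation, which is why it can't interact with a basin-based lake model.

```cpp
// Sketch: D8-style steepest descent on a heightfield. The caller is
// assumed to pass an interior cell; bounds checks are omitted.
struct FlowDir { int dx, dy; };

FlowDir steepestDescent(const float* height, int width, int x, int y) {
    FlowDir best{0, 0};
    float bestDrop = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            float drop = height[y * width + x] - height[(y + dy) * width + (x + dx)];
            if (drop > bestDrop) { bestDrop = drop; best = {dx, dy}; }
        }
    return best; // {0,0} means a pit: slope-based flow simply stops there
}
```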
Requests are always welcome.