I rather doubt it. In most cases, either the OS or the application has to be explicitly written for distributed processing, and I highly doubt that is the case with normal consumer-grade software.

Though I am no expert in the matter, 3D rendering is one of the few places where you will consistently find such processing setups, and that software is either tied directly to the hardware (i.e., no OS to speak of) or, if it runs on a standard OS (Windows, Mac, *nix, and I doubt there are many such applications), it is very expensive. As an example, Industrial Light & Magic had hundreds of processors running for six months to process the 2,500 or so CG effects for the Star Wars Episode I movie. My understanding is that they built the software entirely in house on top of previous work, and had to spend thousands of hours changing the code base to accommodate what was needed for that particular movie.



Even if the OS is capable of distributing processing to other machines, I would think the application would have to use an appropriate threading model; otherwise there is no way for the OS to slice up the workload, since a thread is the smallest unit of work it can schedule.
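
To illustrate what I mean, here is a rough sketch in Python. The names (apply_filter_to_tile, the tile layout) are made up for illustration, and real image software obviously does far more, but the point stands: the OS can only schedule threads the application has created, so unless the code explicitly carves its work into threads, the extra processors have nothing to pick up.

# A rough sketch of the point above: the OS can only schedule whole
# threads, so unless the application splits its own work into threads
# (or processes), there is nothing for extra processors to run.
# Names like apply_filter_to_tile are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

def apply_filter_to_tile(tile):
    # stand-in for some CPU-heavy per-tile image operation
    return [value * 2 for value in tile]

def process_image(tiles):
    # single-threaded version: one thread, one processor,
    # no matter how many CPUs the machine or cluster has
    return [apply_filter_to_tile(t) for t in tiles]

def process_image_threaded(tiles, workers=4):
    # the application explicitly carves the work into pieces
    # the OS can schedule independently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(apply_filter_to_tile, tiles))

if __name__ == "__main__":
    tiles = [list(range(1000)) for _ in range(16)]
    assert process_image(tiles) == process_image_threaded(tiles)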

For the most part, if any of this is done natively by a standard OS, I don't know about it. The vast majority of this kind of work is done by writing specialized software that sends each piece of work to a central server, which acts as a traffic cop: it decides which machine gets the work, sends it on, and likewise routes the result back to the originating machine. I highly doubt PS and GIMP were built with such ideas in mind. Even multiple processors in the same box can only be used if the OS supports them (most do) AND the application was built to know the extra processors exist and take advantage of them. From what I understand (which is not much), timers generally all have to be configured in code to run on the same single processor to avoid some kind of contention with each other.
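
For what it's worth, here is a toy sketch of that traffic-cop arrangement, again in Python with made-up names (render_frame, the two queues). A real render farm would do this over the network with far more bookkeeping, but the shape is roughly this: a dispatcher hands work units to whichever worker is free and collects the results as they come back.

# Toy sketch of a dispatcher/worker setup. Everything here is
# in-process for simplicity; real systems send the work over the
# network to separate machines.
import queue
import threading

work_queue = queue.Queue()    # dispatcher -> workers
result_queue = queue.Queue()  # workers -> originating side

def render_frame(frame_number):
    # stand-in for an expensive per-frame computation
    return frame_number * frame_number

def worker(worker_id):
    while True:
        frame = work_queue.get()
        if frame is None:      # shutdown signal
            break
        result_queue.put((frame, render_frame(frame), worker_id))

def main():
    workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()

    frames = list(range(20))
    for f in frames:           # dispatcher sends the work out...
        work_queue.put(f)

    results = {}
    for _ in frames:           # ...and routes results back as they arrive
        frame, value, worker_id = result_queue.get()
        results[frame] = value

    for _ in workers:          # tell the workers to stop
        work_queue.put(None)
    for w in workers:
        w.join()

    print(sorted(results.items())[:5])

if __name__ == "__main__":
    main()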

Joe