Boosting the poly count in games by a factor of 100,000



ravells
08-03-2011, 09:17 AM
I hope this is real....we are in for a treat if it is!

http://kotaku.com/5826788/could-this-graphics-tech-revolutionize-the-way-video-games-look

Hai-Etlik
08-03-2011, 09:35 AM
Two things come to mind, assuming it's true: where are we going to STORE all that data, and how do you animate it?

NeonKnight
08-03-2011, 11:31 AM
Watch this vid around the 6 minute mark. They talk about algorithms and such.

http://www.youtube.com/watch?v=Q-ATtrImCx4

tilt
08-03-2011, 11:40 AM
yep.. looks really cool - now if one could just build things by thinking of it instead of using those pesky 3d programs *lol*

Midgardsormr
08-03-2011, 11:57 AM
If it can be rendered in real-time, then the size of the data must be manageable—it still has to be pushed through the limited RAM of the computer, so clearly they aren't doing something like using Cartesian coordinates for each point in the cloud, like polygonal models currently use.
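Just to put a rough number on that (purely my own back-of-the-envelope figures, nothing from the video), storing raw Cartesian points gets big fast:

// Back-of-the-envelope cost of storing raw Cartesian points.
// Illustrative only; the actual point-cloud format is unknown.
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t points = 1000000000ULL;            // a billion "atoms"
    const std::uint64_t bytesPerPoint = 3 * sizeof(float)  // x, y, z
                                      + 4;                  // RGBA colour
    std::printf("%.1f GB just for raw points\n",
                points * bytesPerPoint / 1e9);              // ~16 GB
    return 0;
}

So whatever they are doing, it almost certainly involves heavy compression or some implicit encoding of positions.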

As for animation, from the sound of it, the artists' workflow isn't going to change—they'll still create a polygonal model, rig, and animate it, but they won't have the same polygon count restrictions that games normally have. It looks like the conversion to "atoms" occurs just prior to render, but after all of the animation is done.

The big issue with getting this technology out into the world is that rendering engines will have to be rebuilt from the ground up, and that's going to take some time.

I expect, though, that where we'll see some big impact from this tech relatively soon is in particle effects and fluid simulations. Of course, without knowing exactly what's "under the hood," it's hard to know exactly what's going to become possible.

edit: thanks for that other video, NK; it changes the way I was thinking about things, and it proves that I need to type faster, since there were two replies before I finished mine!
So since it's essentially a level-of-detail trick, nothing changes for dynamics simulations—you still can't have millions of points without the associated calculation slowdowns.
And storage of the data may, indeed, be a problem. I don't fancy having to buy and store a terabyte hard drive for each game I play in the future!

tilt
08-03-2011, 12:34 PM
well, it probably would be a terabyte flash drive instead.. with the game pre-installed - plug and PLAY ;)

Hai-Etlik
08-03-2011, 10:23 PM
So it sounds like it's essentially a form of spatial indexing. That just amplifies the problems of space and animation. You not only need space to store each point, you need space to store the index. For stock animation you need a 4-dimensional point cloud and index; for generated animation you need to generate the points and compute the index each frame.

Now I wouldn't have thought it would be practical to make a spatial index that supports such quick perspective-based lookups either, so it may be they do have solutions to these problems, but until I get some independent reports, including data on storage and animation, I'm going to be skeptical.
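To make the storage point concrete, here's a toy C++ sketch of the kind of octree index I'm imagining - a guess at the general shape of such a structure, not anything Euclideon has described:

// Toy octree node, just to show where the extra "index" storage goes.
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

struct Point { float x, y, z; std::uint32_t rgba; };

struct OctreeNode {
    // Interior nodes are pure overhead on top of the point data...
    std::array<std::unique_ptr<OctreeNode>, 8> children;
    // ...and the leaves hold the actual points.
    std::vector<Point> points;
};

// For animation, anything that moves invalidates part of the tree, so you
// either store a 4D (space + time) index up front or rebuild the affected
// subtrees every frame.
void rebuildDirtySubtree(OctreeNode& node) { /* re-insert moved points here */ }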

Moe
08-03-2011, 10:49 PM
According to the video's explanation, the system will only display as many points as the resolution affords. So at a resolution of e.g. 1280x1024 that would be ~1.3 million points per frame (or more at a higher res). Without question, this is possible.
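Just to spell that figure out (my own arithmetic, not from the video) - one visible point per pixel:

#include <cstdio>

int main() {
    const long width = 1280, height = 1024;
    std::printf("%ld points per frame\n", width * height);  // 1,310,720 ~ 1.3 million
    return 0;
}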
If the algorithm they are using is able to convert classic polygon models into cloud data in real time as it brings them onto the screen, or uses some system that generates the points from far less data than actually storing all of them on a hard drive would require, it could be possible. I'm thinking of that scene where the camera flies around the statue of an elephant: there he says they are currently working on speeding up the fps, which might be a hint that the algorithm produces the visible points by searching the index you mentioned before in real time.

So what I am trying to say is: perhaps the overall size of the data stored on your HDD isn't that exorbitant - it is their algorithm that generates the unlimited-detail environment, pushing the points through your RAM.
Correct me if I am wrong, as I am reasoning about this with limited knowledge - this system seems uncoupled from any former approach.

Lastly, if all this is true and manageable, it will push a lot of things to the next level.

>Moe

Hai-Etlik
08-03-2011, 11:02 PM
Converting polygon data on the fly would be pointless, as that in itself couldn't really be any faster or more detailed than existing polygon scanline rasterization techniques, and you would still need to store and index all that data before you could search it. The conversion algorithm would have to be magic to take a mesh and turn it into a point cloud with more detail than the mesh itself, or to do the conversion fast enough to matter.
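To illustrate, here's a naive sketch of what such a converter would have to do (my own illustration, not anything from the video). The samples can only ever lie on the original triangles, so no detail beyond the mesh is gained, and every sample still has to be stored and indexed before it can be searched:

// Naive mesh-to-point-cloud conversion: sample points on each triangle.
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

std::vector<Vec3> sampleMesh(const std::vector<Triangle>& mesh, int samplesPerEdge) {
    std::vector<Vec3> cloud;
    for (const Triangle& t : mesh) {
        for (int i = 0; i <= samplesPerEdge; ++i) {
            for (int j = 0; j <= samplesPerEdge - i; ++j) {
                // Barycentric interpolation across the flat triangle surface.
                float u = float(i) / samplesPerEdge;
                float v = float(j) / samplesPerEdge;
                float w = 1.0f - u - v;
                cloud.push_back({u * t.a.x + v * t.b.x + w * t.c.x,
                                 u * t.a.y + v * t.b.y + w * t.c.y,
                                 u * t.a.z + v * t.b.z + w * t.c.z});
            }
        }
    }
    return cloud;
}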

Also the description in the video was that the point cloud data is what is being stored right down to the level of what would ship with the game. That's what all the stuff about scanning in rocks was about. The rock exists as point cloud data, not a mesh that is converted to a point cloud.

Moe
08-03-2011, 11:10 PM
Mhm, I see.

I guess I better throw my thoughts overboard as they are not based on proper knowledge ;-)
Sounded clear to me as you said it, thanks!

>Moe

Redrobes
08-04-2011, 06:26 AM
It's not a con and it is real, but this technique has some major downsides which are not being highlighted. You will notice that nothing moves in the video. It's a bit complicated to explain why that is, but unless it's something that can be overcome, it's always going to be a problem.

What this basically is, is a procedural 3D scene. It's exactly the same as my viewingdale but in 3D instead of 2D: a hierarchical array of objects with a fast traversal. As you zoom into an object it loads more layers of it. In the traditional way it would be polygons scaled up in vector, but this technique is voxel based, so you can load in more and more layers of an octree structure, just like I would load in more and more smaller and smaller layers of quads in my map.

In the same sense, if I change one icon at one scale level then it changes all the icons in the whole map, since they are being reused. This is a problem for this scene generator, since it means that you can't have local changes to one tree in that scene without affecting them all, or you have to store a cache of individual tree adjustments. That cache would be large and cumbersome for a detailed scene such as those shown, because of the many layers of potential detail that you would need to store for each one. So when I see lots of animated stuff going on in a scene with that infinite detail, I'll know it's worth looking at.
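A rough C++ sketch of the shared-subtree setup I'm describing (my own illustration of the idea, not Euclideon's actual engine):

// Every tree in the scene points at the same voxel octree, so editing that
// shared data edits every tree at once.
#include <cstddef>
#include <memory>
#include <vector>

struct VoxelOctree { /* hierarchical voxel/point detail for one model */ };

struct Instance {
    std::shared_ptr<const VoxelOctree> model;  // shared, read-only detail
    float transform[16];                       // per-instance placement only
    // To deform or animate ONE instance you would need a per-instance copy
    // or delta of the affected subtrees - the cache that gets large and
    // cumbersome at this level of detail.
};

std::vector<Instance> forest(const std::shared_ptr<const VoxelOctree>& treeModel,
                             std::size_t count) {
    std::vector<Instance> scene(count);
    for (Instance& inst : scene) inst.model = treeModel;  // one model, many placements
    return scene;
}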

Theres some discussion about it on the Outerra boards too since he is doing a 3D world terrain generator and has a similar requirement. But he is not doing it the same way but by using wavelets. You should look at his vids as well as they are mighty impressive esp when you zoom right down to centimeter scale and its still got detail in the ground terrain. YouTube some outerra vids. In a sense tho, even with his way there are still problems with animation since you need to know all of the wavelets before you start rendering. Its taking several hours to compute those but once you have them its fast to render. Another similar rendering style is radiosity where you can pre compute all the radiation levels in a scene which take ages and then render it from any view fast. Move something tho and all the radiation factors have to be redone once more.