
Redrobes blog about playing with the technical side of graphics.

Stitching points into a model using MeshLab

Ok so we have the points, so let's make a mesh - a polygon model of the building. First, in MeshLab, clean up some of the stray points. You can do that by selecting points and deleting them. The selection icon is the one with three dots and a cursor arrow: click it and drag out a box over some points, and they turn red to show they are selected. The delete icon is on the far right - three dots and a triangle with a big X through it.

Once you have the vertices for the building without too many stray ones, the first job is to compute normals for the vertices. A normal for a lone point is a slightly odd idea, but the filter looks at nearby points and determines from them which way the normal should face. If you also switch on the lighting using the light-bulb icon, the points are shaded from black to white depending on the view direction and the normal.

So click the Filters menu, then Point Set, and select "Compute Normals for Point Sets". It brings up a small dialog asking how many neighbors to use. 10 or 20 seem to give reasonable results. I guess it depends on how dense and clean the point set is: the cleaner it is, the fewer neighbors you need; the denser it is, the more neighbors you might need to get a good average. Anyway - it's a non-specific number you have to guess at.
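To make the "look at nearby points" idea concrete, here is a minimal numpy sketch of what a filter like this does conceptually - my own illustration, not MeshLab's actual code. For each point it grabs the k nearest neighbors, fits a plane to them with PCA, and takes the plane normal (the covariance eigenvector with the smallest eigenvalue). Note the sign of each normal is ambiguous in this sketch; real tools do an extra pass to orient them consistently:

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate a unit normal per point from its k nearest neighbors.

    Fits a plane to each neighborhood via PCA: the normal is the
    eigenvector of the covariance with the smallest eigenvalue.
    """
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # Brute-force nearest neighbors - fine for small clouds,
        # real implementations use a k-d tree or octree.
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)
        # eigh returns eigenvalues ascending, so column 0 is the
        # direction of least spread, i.e. the plane normal.
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]  # sign is arbitrary here
    return normals
```

You can sanity-check it on points scattered on a flat plane: every estimated normal should come out parallel to the plane's true normal. This is also why the neighbor count matters - too few neighbors on a noisy cloud and the fitted plane wobbles; too many and it smears across real edges.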

Once the normals are computed we can make a surface from them. Go to the same menu and this time use "Surface Reconstruction: Poisson". This time there are a few boxes to fill in; most you can leave alone. The one that matters is the top one, "Octree Depth". Set this to a higher number like 10. It goes up to 11, but 11 often crashes the program, so 10 is safer. The depth sets the resolution of the reconstruction, so the higher the better.
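A rough way to see why depth 11 hurts, assuming the usual octree behaviour where depth d allows about 2**d cells along each axis (so each extra level multiplies the potential cell count by 8):

```python
# Illustration only: octree depth d -> up to 2**d cells per axis,
# so the leaf-cell budget grows by 8x per level, which is
# presumably why depth 11 can eat enough memory to crash.
for depth in (8, 10, 11):
    cells = 2 ** depth
    print(f"depth {depth:2d}: up to {cells}^3 = {cells ** 3:,} leaf cells")
```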

Apply that and close the dialog once it's finished. The main screen looks like it has added some more points, which it has. You need to open the "Show Layer Dialog", the 5th icon in, which looks like a set of square planes stacked on top of one another. It opens a panel on the right showing your original object and, underneath it, the new one called "Poisson Mesh".

Click the eye icon on the original model to close the eye and hide it. Then select the "Smooth Shade" icon, the cylinder with no facets on the menu bar.

The result includes loads of extra polygons to close the whole thing into a solid, but basically it looks like this. It's an amazing result which nevertheless looks terrible. That's because the source set of points was so sparse. We need a better tool to generate a dense set of points. But the principle stands: from mere digital photos you can generate a 3D model of the thing in the images. I.e. no 3D modelling skills required!

From this model you can save it as an OBJ file and import it into Blender etc.
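OBJ is simple enough that you can see exactly what MeshLab is saving: plain text with a "v x y z" line per vertex and an "f i j k" line per triangle, using 1-based indices. A minimal sketch of a writer (the file name is just an example):

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh as a Wavefront OBJ file."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            # OBJ vertex indices are 1-based, not 0-based.
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# A single triangle - importable in Blender via File > Import.
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```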



So this is a result, but not a good one. I said at the start that we're still waiting for the tools to generate that dense set of points; if we had one, it would look better. As I have mentioned in the past, I have a 3D object laser scanner which I use to generate dense clouds of vertices. If the photos produced point sets of the same density and accuracy as laser scanners, you could get much better resulting objects. The pic below is MeshLab doing exactly the same process on a laser scan of a head model, and it looks much better. One day we will be able to do this with terrain and buildings from photos. It's just a matter of when...

Updated 02-04-2011 at 09:26 AM by Redrobes

Categories
Technical

Comments

  1. su_liam's Avatar
    The actual ridges look pretty good. This is fairly normal with professional TIN work: put a lot of points in where you expect a lot of detail, such as ridgelines, and be pretty sparse on relatively flat areas.

    One thing that might help would be to put more points out beyond the area of interest and crop the result down. Unless you have a closed solid, the edges are always going to be a problem area. The algorithm just doesn't know where things are going beyond the edge of your data. The center area of your mesh thus looks pretty good, so I think yez gots the right idea.