Creating 3D models for AR with photogrammetry


One of the main challenges with AR is creating 3D model representations of real objects. We need these models to present the objects virtually in the user's context before they buy them, whether that's furniture, paintings, household items or something else. Solving the technical problems is one thing; doing it cheaply and fast is another. There are several approaches to converting physical objects into 3D models. First, you can model the items from scratch, in software like Maya. Second, you can use 3D scanners, which can be pretty expensive. And third, you can use a technique called photogrammetry, which creates 3D models from many pictures taken from different angles.

In this post, I will create a clone of myself using photogrammetry. The inspiration came from my upcoming trip to Morocco next week. Morocco always reminds me of O Clone, a popular TV series from the beginning of the millennium. Here's my clone:


Photogrammetry is the science of making measurements from photos. The photogrammetry software tries to find matching points between different images, which means you need to take a lot of overlapping images. Then, when the photos are aligned (the positions from which each photo was taken are determined), a sparse point cloud model is created. Next, you can create a dense point cloud and a mesh. A mesh is a set of triangles that determines the shape of an object. After the mesh is created, the photogrammetry software can create textures, which are images that determine the colours and patterns of an object. When this process is completed, you have the option to export the model in one of several popular 3D model formats. Different programs support different formats.

Photogrammetry software

Photogrammetry algorithms are really complicated. There are a lot of photogrammetry programs that can help you with the model creation. Some of them are free, and some of the paid ones have free trials, which is more than enough to check their quality.


My first go was with Colmap, a free and pretty popular photogrammetry tool. Unfortunately, its dense point cloud creation requires CUDA from NVIDIA, which is not supported on the Mac. Creating the sparse point cloud was as far as I could get. Until I get my hands on a Windows machine with a good GPU, I can't say more about this software.


My second try was with Regard3D. It's free and really simple to use. Creating a good model requires fairly professional photos, though, and I wasn't able to produce anything meaningful with my iPhone. The castle from the official tutorial, however, works pretty smoothly. You can export the model to .obj, which is supported on iOS. You also get texture images, which you can assign to the materials of the model. An .mtl file comes with the model and tells you which texture image goes where. Xcode automatically recognizes the different materials, and although it requires some more manual work, it's still fairly easy to integrate the model into an iOS app.
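If you'd rather load the exported .obj in code instead of dragging it into Xcode, Model I/O can do it. Here's a minimal sketch — the file name "castle.obj" is hypothetical, and the .mtl and texture files are assumed to sit next to the .obj:

```swift
import SceneKit
import SceneKit.ModelIO
import ModelIO

// Load the exported .obj ("castle.obj" is a placeholder — use your own export).
// Model I/O reads the geometry and picks up the accompanying .mtl
// automatically when it's in the same directory.
let url = URL(fileURLWithPath: "castle.obj")
let asset = MDLAsset(url: url)
asset.loadTextures()

// Wrap the asset in a SceneKit scene so it can be shown in an SCNView or ARSCNView.
let scene = SCNScene(mdlAsset: asset)
let modelNode = scene.rootNode.childNodes.first
```

For a one-off model, the drag-and-drop route described above is simpler; loading via Model I/O is mostly useful when models are downloaded or swapped at runtime.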

Agisoft PhotoScan

Next, I wanted to try with a free trial of a paid software. Agisoft PhotoScan seemed like the one I need. This software has a lot of features and one of those is the creation of 3D models. It’s available on both Windows and Mac. Before using the software, you need to activate your trial version with a key you get after an email registration.

The software is very easy to use. The first time you can follow along with the tutorial provided, since it gives some useful info about the parameters you need to set during the flow.

The first step is to import a lot of images of the object, taken from different angles. After the import, it's recommended to mask out the unimportant parts of each image, allowing the software to focus on the object that needs to be extracted. There are intelligent scissors that help you perform this rather tedious task on all of the images in your dataset.

[Screenshot: masking a photo in Agisoft PhotoScan]

After masking the photos, you start the photogrammetry workflow, which I recommend following from the provided tutorial. In short, first you align the photos via Workflow -> Align Photos. This creates the sparse point cloud. Next, you create the dense point cloud in Workflow -> Build Dense Cloud. If you choose high quality here, this process might take a while (even an hour). Next, you build the mesh with, you guessed it, Workflow -> Build Mesh. The last step is creating the texture, by going to Workflow -> Build Texture.

Next, you export the model. For iOS and SceneKit, the recommended file format is COLLADA (.dae). This creates the .dae file, along with a jpg or png texture. You then drag and drop the model and the texture into Xcode and convert them to the .scn format, by going to Xcode -> Editor -> Convert to SceneKit format. The model should look like this:
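Once the .scn file is in your project, displaying it in an AR session only takes a few lines. Here's a hedged sketch — the file name "clone.scn", the asset catalog path, and the scale and position values are all assumptions you'd adjust for your own model:

```swift
import ARKit
import SceneKit
import UIKit

// Minimal sketch of showing a converted photogrammetry model in ARKit.
// File name and transform values are hypothetical.
class CloneViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let scene = SCNScene(named: "art.scnassets/clone.scn"),
              let modelNode = scene.rootNode.childNodes.first else { return }
        // Photogrammetry exports are often far too large in real-world metres,
        // so scale the model down before placing it.
        modelNode.scale = SCNVector3(0.01, 0.01, 0.01)
        // Place it half a metre in front of the session's starting position.
        modelNode.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(modelNode)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }
}
```

In a real app you'd normally anchor the model to a detected plane rather than hard-coding a position, but this is enough to verify the export worked.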

[Screenshot: the converted model in Xcode's SceneKit editor]

It’s possible that the model is not at the world origin. In this case, you can adjust the position of the model in the Node attributes inspector or correct the origin in another software, such as MeshLab.
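Instead of fixing the origin in the inspector or MeshLab, you can also recentre the node in code by moving its pivot to the centre of its bounding box. A small sketch (the helper name is mine, not an API):

```swift
import SceneKit

// Shift a node's pivot to the centre of its bounding box, so the model
// sits at its own local origin. Hypothetical helper, not a SceneKit API.
func recentre(_ node: SCNNode) {
    let (minVec, maxVec) = node.boundingBox
    let centre = SCNVector3(
        (minVec.x + maxVec.x) / 2,
        (minVec.y + maxVec.y) / 2,
        (minVec.z + maxVec.z) / 2
    )
    node.pivot = SCNMatrix4MakeTranslation(centre.x, centre.y, centre.z)
}
```

This keeps the underlying geometry untouched, which is handy if you want to re-export the model later.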

If you have forgotten to take images from some angles, you might have holes in the model, like this big hole at the back of my head.

[Screenshot: a hole at the back of the model's head]

Showing the model in an iOS app is pretty easy from this point on; you can check the AR tutorials on my categories page for more details.


The quality of the model is solid. Considering the photos were taken with an iPhone in a regular environment (without any special lighting), the software does a good job. However, there are cases where photogrammetry is not a good option. For example, it doesn't work well for reflective or shiny objects. It also struggles with light-coloured objects with smooth surfaces, such as a plain dish. That said, I was surprised that my white shirt (which was no coincidence) was reconstructed correctly. Objects with a lot of patterns and different colours are a good choice for photogrammetry.

To sum up, photogrammetry is a very interesting technology, and it will be used a lot for creating 3D models for augmented reality. Probably all of the approaches mentioned above (scanners, manual modelling and photogrammetry) will be used, depending on the type of object.

What are your thoughts about photogrammetry? Do you have experience in creating 3D models out of real objects? Comment in the section below.


