Before we begin: Apple has provided some sample images to help us get started quickly with creating our 3D model. We can also use these images as a guide to understand what our pictures should look like.
If you would like to start by taking your own pictures, you can read the Using the CaptureSample App section below.

For this article, I will be using the Nike Air Force 1 images Apple provided. You are free to use any images you like.
Once we have downloaded the sample code from the link above, we can run the CaptureSample app on our iPhone (if you are having difficulty figuring that out, check out this video).
This sample app helps us take clear images that convey our object’s depth and gravity for the 3D model: it summarizes some of Apple’s best-practice recommendations for getting a decent capture and approves each photo with a green checkmark.
Depending on the object, 20 to 200 images should be enough to get a good 3D model. You can also use the built-in Camera app on your iPhone to take the images. All you need to do is make sure your object is well lit and placed on a dark cloth or an uncluttered background. For more information on taking good images, you can read this article by Apple.
Now that we have photographed the object from all angles, we need to get the pictures over to our MacBook, where the magic happens. To access the photographs taken with the CaptureSample app, open the Files app on your iPhone, make sure you are in “On My iPhone”, tap the CaptureSample folder, then the Captures folder, and you should see the folder for the images you just took.
Now that we have the folders with the pictures on our MacBook, it is time to bring them into the HelloPhotogrammetry app to make a 3D model.
When you launch the project, the first thing you need to do is click on the top-level project and make sure your team is set; otherwise, you will get an error when attempting to run the program.
With that done, you can make your way to the file main. Scroll to the bottom of it, and you should see these lines of code:
Inside the HelloPhotogrammetry.main() function, you will pass in an array of strings consisting of the parameters needed to run the program successfully. The string arguments we will be passing in are:
- Input folder: The local input folder of images you will be converting.
- Output folder: The full path to a USDZ output file for the 3D model.
- Detail: The detail of the output model in terms of mesh size and texture size. Available detail options include preview, reduced, medium, full, and raw.
- Sample ordering: Setting to sequential may speed up computation if images are captured in a spatially sequential pattern. Available sample ordering options include unordered and sequential.
- Feature sensitivity: Set to high if the scanned object does not contain a lot of discernible structures, edges, or textures. Available feature sensitivity options include normal and high.
Once you understand the string parameters, you can complete the function.
Note: The program will automatically create the output folder provided.
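To make this concrete, here is a rough sketch of what the completed call might look like. The folder paths below are placeholder example values of my own, not part of Apple’s sample, and the short flag spellings (-d, -o, -f) are how the sample’s argument parser named its options at the time of writing — double-check them against the main file in your copy of the sample code.

```swift
// Sketch of a completed invocation — paths are placeholders, substitute your own.
HelloPhotogrammetry.main([
    "/Users/you/Desktop/Captures/AirForce1",         // input folder of images
    "/Users/you/Desktop/airforce_medium/model.usdz", // full path to the output USDZ file
    "-d", "medium",      // detail: preview, reduced, medium, full, or raw
    "-o", "sequential",  // sample ordering: unordered or sequential
    "-f", "normal"       // feature sensitivity: normal or high
])
```

Passing the array directly to main() is handy for experimenting inside Xcode; the same strings could also be supplied as command-line arguments if you run the built tool from Terminal.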
We can now run the program, and the magic of converting our images into a 3D model will begin. This may take a few minutes, depending on your MacBook’s specifications, the number of images you took, and the detail level you specified.
When it is done, you should see a message in the output terminal saying:
You can now access your 3D model in the output folder you specified in the HelloPhotogrammetry.main function. In my case, I will find my 3D model at “/Users/eyimofeoladipo/Desktop/airforce_medium”.
Here is a GIF of my wonderful 3D model.
Now that you have learned how to create a 3D model using the Photogrammetry and Object Capture APIs on your Mac, you can integrate the sample code apps with your own app. Have fun!
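As one small example of such an integration (the file path below is a placeholder, and this assumes an app that links SceneKit), you can load the generated USDZ file directly into a SceneKit scene:

```swift
import SceneKit

// Path to the USDZ produced by HelloPhotogrammetry — replace with your own.
let modelURL = URL(fileURLWithPath: "/Users/you/Desktop/airforce_medium/model.usdz")

do {
    // SCNScene can read USDZ files directly on recent OS versions.
    let scene = try SCNScene(url: modelURL, options: nil)
    // Attach the scene to an SCNView, e.g. in your view controller:
    // sceneView.scene = scene
    // sceneView.allowsCameraControl = true  // lets users rotate the model
} catch {
    print("Failed to load model:", error)
}
```

RealityKit’s Entity loading or a QuickLook preview would work just as well; SceneKit is shown here simply because it needs the fewest lines.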