With augmented reality, you can drive any car you want. In this post, we will drive an Audi Q7, using horizontal plane detection. This gives a more natural effect of driving the car on the floor, instead of floating it in mid-air. For simplicity, the car will have only one speed: one meter per second.
Here’s a video of our end product.
Create a new Augmented Reality project in Xcode and call it ARCar. Download a free car 3D model; in my case, I’ve downloaded the Audi Q7 from https://free3d.com/3d-models, but feel free to download and experiment with other cars as well. Note that if the model is not in .dae format, you will have to convert it with an online tool or an open source model-conversion framework such as AssimpKit. In our case, the model is already converted to a SceneKit file and can be found in the final project repo (linked at the end of this post).
When you add your model in Xcode, you might have to specify its textures (if they are provided separately as images). You can also play with the colours in the SceneKit editor, especially the Diffuse colour. Name the scene containing the car audi. The root node in the scene should also be named audi.
Now, let’s build the user interface of the app. It consists of the standard AR scene view and three buttons at the bottom. The middle button, called “I want Audi”, will add a new car to the scene. The other two buttons move the car along the z-axis (backward and forward).
Next, it’s time to start coding. The regular AR setup needs to be done in viewDidLoad and viewWillAppear. We will also round the corners of the buttons for managing the car, so they look better.
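Here’s a sketch of how the view controller setup might look. The outlet names (sceneView, addCarButton, frontButton, backButton) are my assumptions; the helper methods setupScene, setupButtons and runWorldTracking are shown later.

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    // Assumed outlet names; wire them up in the storyboard.
    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var addCarButton: UIButton!
    @IBOutlet weak var frontButton: UIButton!
    @IBOutlet weak var backButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // The view controller acts as the AR scene view delegate.
        sceneView.delegate = self
        setupScene()
        setupButtons()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        runWorldTracking()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session when the view goes away.
        sceneView.session.pause()
    }
}
```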
In the setupScene method, we load an empty scene.scn; the scene with the car will be loaded in code. We will also need to implement the AR scene view delegate, so make sure to set it to the ViewController that was created with the project.
In the runWorldTracking method, we set the plane detection to horizontal, since we will drive the car only on a horizontal surface. The setup method for the buttons simply sets the cornerRadius to a value and sets clipsToBounds to true.
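These three setup methods might look like the following sketch. The scene file path and the corner radius value are assumptions; adjust them to your project.

```swift
private func setupScene() {
    // Load the empty scene; the car scene is loaded later in code.
    let scene = SCNScene(named: "art.scnassets/scene.scn")!
    sceneView.scene = scene
}

private func runWorldTracking() {
    // Detect horizontal planes only: we drive the car on the floor.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
}

private func setupButtons() {
    // Round the corners of all three buttons.
    [addCarButton, frontButton, backButton].forEach { button in
        button?.layer.cornerRadius = 10
        button?.clipsToBounds = true
    }
}
```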
Next, let’s implement the scene view delegate methods. We need two of them: renderer(_:didAdd:for:) and renderer(_:didUpdate:for:). When a new node is added to the scene, renderer(_:didAdd:for:) is called. Every node has a corresponding ARAnchor. When a horizontal plane is detected, the anchor is of type ARPlaneAnchor. That’s why, when the guard at the beginning of this method succeeds, we know that a horizontal plane was detected. When this happens, we create a floorNode (we will see its implementation a bit later) and add it to the node that is being added to the scene.
As the user moves the device and scans the environment, the anchor and its node grow bigger and bigger, so we need to create a bigger floor node as well. The renderer(_:didUpdate:for:) method tells us when this happens. Again, we check whether the anchor is a plane anchor. If it is, we delete all the nodes currently added to the parent node (we now need a bigger floor). The bigger node is then created using the createFloor method and added to the node. This gives a visual indication of what you have tracked so far.
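A sketch of the update method, following the same pattern:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Remove the old (smaller) floor nodes…
    node.enumerateChildNodes { child, _ in
        child.removeFromParentNode()
    }
    // …and add a bigger one matching the updated anchor.
    node.addChildNode(createFloor(planeAnchor: planeAnchor))
}
```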
Now, let’s look at the createFloor method. It takes an anchor, uses its extent and position to create an SCNPlane geometry, and creates a node with that geometry. You can play around with the diffuse colour. The position of the floorNode corresponds to the position of the planeAnchor. We also need to rotate the node by 90 degrees so that it lies horizontally. Optionally, you can add a physics body to the node if you want the cars to drop when they drive off the floor (you would need to add physics bodies to the car nodes as well). Check out my other blog post to see how you can do this.
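A minimal sketch of createFloor; the semi-transparent blue colour is just my choice for the tracking indicator:

```swift
private func createFloor(planeAnchor: ARPlaneAnchor) -> SCNNode {
    // Size the plane to the detected area.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.5)
    plane.firstMaterial?.isDoubleSided = true

    let floorNode = SCNNode(geometry: plane)
    // Center the node on the anchor.
    floorNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    // SCNPlane is vertical by default; rotate it 90° to lie flat.
    floorNode.eulerAngles.x = -Float.pi / 2
    return floorNode
}
```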
Finally, let’s add the IBActions for the three buttons. The most interesting one is the addCar method. Here, we first need to get the current orientation and location of the camera. We do this by taking the transform of the scene’s point of view. It’s a matrix where the orientation is stored in the third row and the location in the fourth row. Adding them together gives a point directly in front of the camera, where the car will be placed.
After we know this, we load the audi scene we have created at the beginning of this post. We take the root node and set its position to the current position of the camera. Then we add the car node to the scene.
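The addCar action might be sketched like this. The scene and node names follow the audi naming from earlier; note that the third row of the transform is negated, since the camera looks down its negative z-axis.

```swift
@IBAction func addCar(_ sender: UIButton) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform

    // Third row: camera orientation (negated so it points forward).
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
    // Fourth row: camera location in world space.
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    // One meter in front of the camera.
    let position = SCNVector3(orientation.x + location.x,
                              orientation.y + location.y,
                              orientation.z + location.z)

    guard let carScene = SCNScene(named: "audi.scn"),
          let carNode = carScene.rootNode.childNode(withName: "audi",
                                                    recursively: true) else { return }
    carNode.position = position
    sceneView.scene.rootNode.addChildNode(carNode)
}
```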
The other two methods, for moving forward and backward, call the same method, runActionForCarNode, with a positive or negative distance. Currently, it’s hardcoded to one, but you can extend this, for example by overriding the touch events to compute the acceleration of the car.
This method is pretty simple: we create an SCNAction that moves the car node along the z-axis by the provided distance, over a duration of one second.
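Putting the movement together, a sketch under the assumption that the two IBActions are named moveFront and moveBack, and that the car node is still named audi:

```swift
@IBAction func moveFront(_ sender: UIButton) {
    runActionForCarNode(distance: -1) // forward: negative z
}

@IBAction func moveBack(_ sender: UIButton) {
    runActionForCarNode(distance: 1)  // backward: positive z
}

private func runActionForCarNode(distance: Float) {
    // One meter over one second, matching our single fixed speed.
    let action = SCNAction.moveBy(x: 0, y: 0, z: CGFloat(distance), duration: 1)
    sceneView.scene.rootNode
        .childNode(withName: "audi", recursively: true)?
        .runAction(action)
}
```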
That’s everything we needed to do in order to have basic car driving. You can find the full source code of this project here.
The first sentence of this post was “With augmented reality, you can drive any car you want”. But this doesn’t apply only to virtual cars: if you pick up this emerging and super exciting technology early, you might get to drive the real ones as well.