Augmented reality in Unity, Part 2: Lessons Learned


After approximately 560 hours of development (isn’t that six and a half weeks of 12-hour days? Yes, yes it was), Channel TWo (@jwestbrook1 and I) has launched two AR apps.

polyCopRiotNode is available for iOS and Android; the links are in the description here. The project description is here. There are three nodes in Chicago; you can find them on the Google map. For those interested in the technical details for Unity, I posted some notes about them here. The app was released for a show in Portland but contains nodes from past shows. We previously used WebAR (for Expose, Intervene, Occupy) and Layar (for Cyber In Securities).

The project uses Android and iOS device GPS (or, as I found out, for lower-priced wireless-only iPads, fake GPS based on wireless access points) to check whether users are within a geo-fence of a block or so around the points we selected. In the process of developing this, we also developed a way to map lat/lon to Unity x/z space (if, like me, you never really considered that lat/lon is a grid on a sphere, it’s a lot to take in).
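Since the math here is platform-independent, a minimal sketch of both pieces in Python (function names and the geofence radius are my own for illustration, not our production code): an equirectangular approximation maps lat/lon to local meters around an origin, and the same mapping doubles as a geofence distance check.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def latlon_to_xz(lat, lon, origin_lat, origin_lon):
    """Map lat/lon (degrees) to local x/z meters around an origin point.

    Equirectangular approximation: fine across a few city blocks, but
    increasingly wrong over long distances, because lat/lon is a grid
    on a sphere and longitude lines converge toward the poles.
    """
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    # Scale longitude by cos(latitude) so east-west meters come out right.
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat))  # east-west
    z = EARTH_RADIUS_M * d_lat                                       # north-south
    return x, z

def inside_geofence(lat, lon, center_lat, center_lon, radius_m=100.0):
    """True if (lat, lon) is within radius_m meters of the center point."""
    x, z = latlon_to_xz(lat, lon, center_lat, center_lon)
    return math.hypot(x, z) <= radius_m
```

In Unity you would feed the resulting x/z into a transform’s position, with y handled separately; a real app reads lat/lon from the device location APIs rather than hard-coded values.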

The other project is not GPS-based and is currently only available on Android. It’s also on display in Chicago for the glitchicago show @ Ukrainian Institute of Modern Art. You can download it on Google Play.

Lessons learned:

Gyroscope + compass for 360° AR with a geo-located point is not ready for prime time (e.g. walking around a 3D model anchored at a specific point to see all its sides). I have somewhat-working code for this on Android, but it’s really glitchy, even with easing. It did not work at all on our iPad, unless you want to make people vomit, in which case it works well.
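For what it’s worth, the easing mentioned above amounts to something like this platform-neutral sketch (Python for illustration; the function name and factor are mine). The subtlety that bites people is the 359°→0° wraparound: a naive lerp between headings spins the long way around.

```python
def ease_heading(current, target, factor=0.1):
    """Exponentially ease a compass heading toward a target, in degrees.

    Naive interpolation breaks at the 359 -> 0 wraparound; take the
    shortest signed difference first, then apply the easing factor.
    """
    diff = (target - current + 180.0) % 360.0 - 180.0  # shortest path, in (-180, 180]
    return (current + factor * diff) % 360.0
```

Called once per frame with the latest compass reading as the target, this smooths jitter, but it can’t fix a compass that is simply reporting bad headings, which is why the result still glitches.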

Major lesson, for me: think about what you are testing and how your expectations might affect it. Related to the above: we had a cop model standing at our apartment. We took the device out and walked around the neighborhood to see the model from all sides. Each time, we held the device up facing the direction of our apartment and then turned it on. It worked, beautifully. What I had not considered is that the gyroscope’s zero orientation is whatever direction you are facing when you turn on the device. A day later I tried it from another spot while sitting in the car, not facing the direction of the model, and the model appeared right in front of me anyway.
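The standard workaround for that arbitrary zero (sketched below in Python; names are mine, and this is the general approach rather than necessarily what we shipped) is to sample the compass once at startup, ideally averaged over a second or two, and use it as an offset that anchors the gyro’s relative yaw to true north.

```python
def true_north_yaw(gyro_yaw, compass_at_start, gyro_yaw_at_start):
    """Convert a startup-relative gyro yaw into a true-north yaw, in degrees.

    The gyroscope's zero is wherever the device pointed at launch; the
    compass gives an absolute heading but is noisy. Sampling the compass
    once at startup gives an offset that anchors the smooth gyro yaw to
    true north, without asking users to calibrate anything themselves.
    """
    offset = (compass_at_start - gyro_yaw_at_start) % 360.0
    return (gyro_yaw + offset) % 360.0
```

The compass is only consulted at startup, so its frame-to-frame noise doesn’t leak into the rendered orientation; the trade-off is that a bad initial compass sample skews every heading until the app restarts.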

Vuforia’s AR camera works well. I played a bit with their image-tag recognition but, as our project is GPS AR, we’re only using the camera, because it seems to handle automatic scaling/sizing for any device camera. I spent about a week writing my own AR camera, and even testing with only a few devices it looked like it would be a major headache.

Quaternions are really complex. Unity’s description for working with them is this (and only this):

float x;
X component of the Quaternion. Don’t modify this directly unless you know quaternions inside out.
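That warning exists because a quaternion’s components are not angles: you compose rotations with quaternion multiplication (in Unity, the `*` operator, or helpers like `Quaternion.Euler`) rather than by touching x/y/z/w directly. A minimal sketch of the underlying math, in Python for illustration:

```python
import math

def quat_from_yaw(degrees):
    """Quaternion (x, y, z, w) for a rotation of `degrees` about the y axis."""
    half = math.radians(degrees) / 2.0
    return (0.0, math.sin(half), 0.0, math.cos(half))

def quat_mul(a, b):
    """Hamilton product: the rotation b followed by the rotation a."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    )
```

Note that a 90° yaw has y = sin(45°) ≈ 0.707, not 90 or 0.25: the components mix the axis and the half-angle, which is exactly why editing them directly goes wrong.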

Google Play had our app available the same day we uploaded it. The (iOS) App Store had it available…two weeks later, after I had to revise it with a backdoor for them to test it.

It takes a couple of minutes to compile and run for Android from Unity, with a direct upload to the device. iOS takes much longer: you have to build from Unity, open the project in Xcode, then compile and upload to the device from there.

Completing, uploading and publishing to the Google Play store is incredibly easy. If you’re interested in making apps and only have an iOS device, get an Android device. Completing and uploading an app for Apple’s App Store is a complete nightmare of permissions and profiles. If you are interested in making apps for iOS, make sure you’re dedicated, very dedicated. Even just getting a device set up to develop and debug on for iOS is pretty ridiculous.

Neither Apple nor Google list “art” as a category for your app. :frowning:

Anyway, we have another nearly finished AR app to launch soon and will be exploring more with: gyroscope + compass orientation, geo-fencing, image recognition. It will continue in Unity for now because, having looked into AR on oF and Cinder over the last week, I can’t see how two people could develop something like this for both iOS and Android without the Unity framework.

If anyone has questions on any of this (most of it applies to any GPS AR on any platform, not just Unity), feel free to ask. AR is a really exciting medium but it’s fairly daunting, I’m glad to answer any level of question (if I can).


Hi AtrowBridge!

First, I’d like to say that when I read this article, I felt like you’d read my mind.
Second, I hope you’ll bear with me: I’m French and my English is a little bit lame.

I started a project where I want to display information on real world places.

For now, I have a script that converts GPS coordinates into Unity coords. It works well.
I have a database with places and info.
I added a Vuforia AR camera (since they offer a free option). For now, I don’t really know what to do with it, but that’s not a worry (I guess).

My concern is the same as yours at the time of the article: how to properly calibrate device orientation when the app starts. I tried a few things but the results are poor.
And since the app is a public one, I don’t want users to have to calibrate the gyro themselves.
Can you please tell me more about how you achieved this automatic calibration?

Thanks in advance, mate. Good article!