
AR for non-techies: a brief overview of the state of the technology (2021)

Types of content

Any digital asset can be delivered in AR: an image, a model, film or audio. The limitations come from where the content assets are stored, either in an app or in the cloud. This relates to the experience we want to give the user: will they wait for a download, or do they want immediacy? Think of this like the old-school annoying buffering on YouTube.

Models and images

Content assets such as images and models can be brought to life in Unity, a world-leading AAA game engine. Understandably, what can be done with content is limited only by the imagination of the creative team and the technical skills of the Unity developers.

Audio and video

Cloud-based audio and video can be streamed to any AR experience. This does, however, require a stable wifi or mobile data connection.

Audio can be Dolby, 3D, ASMR or binaural in nature, with the limitation that the user's device needs to be able to decode and produce the final audio. Positional audio is straightforward: sound output can be attached to an AR object and will sound as though it comes from that object's position.
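To show what "attached to an object's position" means in practice, here is a toy sketch (not any real engine's API) of the kind of calculation a positional audio engine does under the hood: deriving volume and stereo pan from where the AR object sits relative to the listener.

```typescript
// Toy illustration only: derive gain (volume) and stereo pan for an AR
// object from its position relative to the listener.
type Vec2 = { x: number; z: number };

function positionalAudio(listener: Vec2, source: Vec2) {
  const dx = source.x - listener.x;
  const dz = source.z - listener.z;
  const distance = Math.hypot(dx, dz);
  // Volume falls off with distance (inverse-distance law, clamped to 1).
  const gain = Math.min(1, 1 / Math.max(distance, 1));
  // Pan: -1 = fully left, +1 = fully right, from the horizontal offset.
  const pan = distance === 0 ? 0 : dx / distance;
  return { gain, pan };
}

// An object 4m away, directly to the listener's right:
console.log(positionalAudio({ x: 0, z: 0 }, { x: 4, z: 0 })); // quieter, panned right
```

Real engines (Unity's spatializer, or the Web Audio API's `PannerNode` in browsers) do a more sophisticated version of the same idea, updating it every frame as the user moves.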



Typically, videos tend to be "green screened" to give the impression that the subjects of the video, such as a CEO, presenter, goblin, dinosaur, Bart Simpson or tiger, are actually in the local real-world space.


Content can reach out to other cloud-based resources such as AI. AI is typically used for digital customer assistants or some form of language processing, and has not yet appeared in many AR experiences.

The other use of AI is more R&D-oriented and involves image manipulation or identification, such as Google Lens or the state-of-the-art work exemplified in YouTube channels such as Two Minute Papers.


There are two types of AR device: users' own mobiles, and dedicated glasses/headsets (AR glasses).


Users' own mobiles and tablets

These give a "magic window" experience in which the AR world is placed inside the video feed displayed on the device. Most medium- to high-end mobile devices support AR, although not every Android phone or tablet does.

Dedicated glasses/headsets such as the HoloLens, Magic Leap, Vuzix and Nreal

Movement is more natural, as it is the user's head direction that is tracked, but these offer a similar magic-window experience because the AR content is seen through a limited-size invisible virtual screen.

Unfortunately, the limits of the projection technology used (caused by the need for the AR experience to be reflected to the eye from the inside of the glasses) mean that the displayed experience has to be brighter and higher-contrast than the real-world background, robbing much of the mixed-reality effect.

(Image comparison: what you get sold versus what you actually get.)

Placement in the real world

AR content is placed in the real world based on anchors. Anchors are created by the AR application and tie the AR content to one of:

  1. Target images
  2. Planes such as floors and tables
  3. GPS positions (this is typically very loose, as mobile phones are poor at accurate GPS, often 6m or more out)

The better the anchor and the higher the number of anchors (the more complex the image, the more fully identified the plane, or the more solid the GPS fix), the more stable and realistic the AR experience is.
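The three anchor kinds above, and the very different accuracy you can expect from each, can be sketched as a small data model. This is purely illustrative: the GPS figure comes from the text above, while the image and plane figures are assumptions for the sake of contrast (real values vary by device and conditions).

```typescript
// Illustrative model of the three anchor kinds and their rough accuracy.
type Anchor =
  | { kind: "image"; targetId: string }
  | { kind: "plane"; surface: "floor" | "table" | "wall" }
  | { kind: "gps"; lat: number; lon: number };

function typicalAccuracyMetres(anchor: Anchor): number {
  switch (anchor.kind) {
    case "image": return 0.01; // cm-level while the target is in view (assumed)
    case "plane": return 0.05; // a few cm once the plane is well detected (assumed)
    case "gps":   return 6;    // "6m or more out", per the text above
  }
}

console.log(typicalAccuracyMetres({ kind: "gps", lat: 51.5, lon: -0.1 })); // 6
```

The two-orders-of-magnitude gap between an image anchor and a GPS anchor is exactly why the stability of the experience depends so heavily on which anchor type you build on.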

Shared AR

Shared AR is based on sharing the location of the first user's device, and then the anchors that device has set out in the real world, with each new user. The reliability of this depends on the quality of the target images, and on how well planes have been detected and the local real-world space mapped by the first device.

This is the reason why cooperative AR experiences tend to be indoors, and either target-image based or placed on a detected plane (tabletop, floor, wall) by the first user.

GPS flakiness is also the reason there are so few shared AR experiences, and why games such as Pokémon GO are played outside rather than inside. Sharing flaky anchors just means an even worse experience for the other users.
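To put a number on that flakiness: the standard haversine formula converts two lat/lon fixes into a ground distance, and even a tiny disagreement in coordinates, well within normal phone GPS error, moves a shared anchor by metres. The coordinates below are arbitrary examples.

```typescript
// Haversine distance between two lat/lon fixes, in metres.
const EARTH_RADIUS_M = 6_371_000;

function haversineMetres(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Two GPS fixes of the "same" anchor, only 0.00006° of latitude apart:
console.log(haversineMetres(51.5, -0.1, 51.50006, -0.1).toFixed(1)); // ~6.7 metres
```

A virtual object that jumps nearly seven metres between two users' devices is unusable for anything indoor or tabletop-scale, which is why shared AR leans on image and plane anchors instead.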

Delivering AR Experiences

AR experiences are delivered in-app, via an AR platform, or via WebAR. None of these allows accurate geolocation, due to the underlying limitations of GPS; this is an inescapable limitation of the industry at the moment, and even Niantic, working with Google, have not cracked it. However, each approach has its own set of advantages and disadvantages.

Specific AR Apps

Specific AR apps currently have a number of advantages. They excel at real-world positioning and scanning, which allows AR experiences to be shared easily and to be of a much higher quality. Assets can be built into the app itself, which means that AR experiences download faster, reducing the wait in the user journey.

The disadvantages are that you need two of them: one for Android and one for Apple. These need to be maintained and supported on the app stores. You also need to direct users to download the app, either at the point of engagement or before, which makes for a clunky user experience. Finally, extra content will need to be hosted somewhere, and that hosting will need to be maintained and supported.

AR Platforms

These are third-party companies, such as 8th Wall and Zappar, that provide the infrastructure to deliver AR experiences. Often they have their own content creation platforms, providing hosting and analytics. However, most need the user to download their app to launch content.

AR platforms have a number of advantages in that they take a lot of the donkey work out of the logistics of providing an AR experience to a wide audience. The platforms are device agnostic, which means you develop your AR experience only once and the final experience is available to the largest audience.

The major disadvantage is that the user again has to download an AR platform-specific app. This raises branding questions, as your branding can be confused with the AR platform's own because the experience is seen through their app, although this can be white-labelled at a cost. Finally, none of the platforms' content creation tools can compete with Unity, although Zappar does have Unity integration built into its offering.


WebAR

Essentially, this is a web page coded in such a way as to mimic the functionality offered by a dedicated AR app. This has the major advantages of running without the need to download an app and of only needing to be built once to be accessible on both iOS and Android.

The major disadvantage is world-space tracking: web-based AR cannot compete, as code running in browsers cannot yet run fast enough to be as accurate as the other platforms. This makes WebAR ideal for experiences that use target-based anchors, but little else, as tracking of the virtual object is flaky at best, causing it to "move" in the real-world space as the anchor points change.
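Because WebAR support varies by browser and device, a WebAR page usually starts by feature-detecting before launching. The sketch below uses the real WebXR call `navigator.xr.isSessionSupported("immersive-ar")` and falls back gracefully where WebXR is absent; the fallback message is just an example.

```typescript
// Feature-detect WebAR support via the WebXR Device API, falling back
// cleanly outside a browser or on an unsupported device.
async function supportsWebAR(): Promise<boolean> {
  const xr = (globalThis as any).navigator?.xr;
  if (!xr) return false; // no WebXR at all (e.g. Node, or an older browser)
  try {
    return await xr.isSessionSupported("immersive-ar");
  } catch {
    return false; // the call itself can reject on some devices
  }
}

supportsWebAR().then((ok) =>
  console.log(ok ? "launch the WebAR experience" : "fall back to a 2D page")
);
```

Having a sensible 2D fallback matters more for WebAR than for the app route, since the whole point is that the user arrives with nothing pre-installed.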
