by Brendan Hewitt
Global Innovation Director, PHD Global Strategy

It was difficult to avoid the hysteria generated in 2016 when a mobile application encouraged hordes of virtual hunters to gather in public spaces in the hope of catching the rarest 3D-rendered Pokémon creatures.

Indeed, such was the competitive fervour that Pokémon Go even stopped traffic (and unfortunately worse) in major cities, ostensibly the result of players’ misdirected attention. Perhaps you also chanced upon, or had a younger relative demonstrate, the unique experience of a dancing hotdog on your dining table in 2017. So popular was the meat-based apparition that Snapchat’s first major foray into projection AR became its own shareable meta-meme.

While seemingly innocuous, these moments will in time likely be viewed as early markers on the path toward widespread acceptance of the technology known as Augmented Reality.

What we refer to as Augmented Reality (also ‘mixed reality’ for some purposes) is no one application of technology; rather, it can be anything that overlays an additional layer of data onto what we naturally see or hear. In the case of our dancing hotdog, that data is a fun little distraction. However, the true power of AR will eventually harness much grander capabilities. In a world surrounded by masses of data and digital markers, this true power lies in the ability to make sense of those information points, restructuring and displaying them in a way that, ideally, makes our lives easier. If there is one certainty to the technology, it’s that it will have a transformative effect on the way we interact with the world.

Making unique and useful sense of this data, however, relies on the proficiencies of AR’s Artificial Intelligence cohort, itself undergoing a rapid and diverse acceleration with developments in machine learning. In addition, for us to properly interface with AR, we require marked progressions in the hardware space. The momentum of these three elements – AR, AI and hardware – is inextricably linked to AR’s eventual destiny.

For that reason, 2018 will see AR quietly making plans for its transformative charge; however, it will not be AR’s breakout year – if such an event exists. More likely, the technology will realise its potential in a more organic manner, gradually subsuming responsibilities within the digital ecosystem to the point where its presence is ubiquitous.

Nonetheless, be sure that we will encounter accelerated progress within the bounds of mobile-facilitated AR, the focus here. Recent developments have indeed democratised elements of the technology – providing brands with the near-unhindered opportunity to jump into the still-nascent space. While I do believe the AR we encounter in the coming year will largely remain in the ‘for fun’ camp, these recent developments will bootstrap collective discovery and understanding of the technology, and in doing so, we will begin to properly unpack both its immediate and near future opportunities.

It would be unfair not to credit Snapchat’s role in 2017. So far, the platform has been the most pioneering in the consumer-facing AR run. However, the actions with the biggest potential occurred elsewhere and, unsurprisingly, two tech behemoths were behind them. For Apple, the move was two-pronged. The iPhone X release came with an upgraded A11 chip: processing power designed specifically for faster rendering of AR imagery. This was both a recognition of its necessity and a strong indication of the company’s focus. That intent is clear considering the X’s announcement followed that of Apple’s ARKit, a developer platform that places impressive AR design in the hands of virtually anyone who wants it.

Google wasn’t going to be left behind either. Having previously paved the way with its Tango platform, Google released ARCore, a similar proposition with arguably even wider potential application. At the time of writing, ARCore can also work within mobile browsers, freeing up opportunities outside the standalone-app space. Both developer platforms are designed to use the phone’s camera as your AR ‘canvas’. They harness motion and face tracking, environmental understanding and lighting estimation to render projections within the chosen app or browser.
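To make those capabilities concrete, a minimal sketch of what such a session looks like to a developer on Apple’s side of the fence (Swift, iOS 11+). This is an illustrative assumption of a typical setup, not a quote from either company’s documentation; the class name and the placed object are invented for the example.

```swift
import UIKit
import ARKit
import SceneKit

// Hypothetical view controller illustrating the three ingredients the
// platforms advertise: motion tracking, environmental understanding
// (plane detection) and lighting estimation.
final class ARDemoViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARWorldTrackingConfiguration()   // motion tracking
        config.planeDetection = .horizontal            // find floors and tabletops
        config.isLightEstimationEnabled = true         // match virtual shading to the room
        sceneView.session.run(config)
    }

    // Called when ARKit detects a surface; anchor a simple virtual object to it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0.05, 0) // sit on top of the detected plane
        node.addChildNode(box)
    }
}
```

The point of the sketch is how little of it there is: the heavy lifting – tracking, surface detection, light estimation – is done by the platform, which is precisely what lowers the barrier for brands and developers.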

Apple and Google have made smart moves here. For the past half-decade, AR has endured a somewhat spluttering gestation; as a result, a proper and easily adopted use-case has struggled to gain groundswell. By opening the tools of creation on their existing hardware assets, the tech titans have effectively crowd-sourced what could be the next digital breakthrough, with unprecedented scale. We can think of it as a ground-up movement, something Jason Tanz of Wired calls ‘the Inductive Theory of Platform Development’. The grand idea will not trickle down into a product; rather, small solutions will expand to become grand ideas¹. During this period of collective discovery, you can be sure both parties will spend any spare time cracking the so-far unfulfilled hardware solution too. If it’s not them, it will be another. With overall spending on AR technology set to hit $60 billion by 2020², it might come sooner than we think.

As with explorations in 2017, brands will undoubtedly embrace the opportunity here – rendering distinctive brand assets and personalised messaging in an effort to trigger category entry or build brand affinity. On the proviso that these exhibit a unique and relevant functionality, such executions will build on the brand lens-frenzy of last year, likely generating comparably greater interaction and sharing given the removal of the facial-cue requirement and the potential to be experienced outside a single app. The Film, Gaming, FMCG and QSR verticals will remain the biggest use-cases here for obvious reasons, though I expect a greater proportion of executions will add post-purchase value through usage illustrations or overlaid product features; a utility now more easily facilitated than it was a year ago.

This more utilitarian element of AR will also make its own slow but sure strides. This will be particularly true of the location- and marker-based functionality of the technology. While it may enable you to find your friends at a crowded festival, the retail environment is the natural beneficiary here.

Homeware brands such as IKEA and Wayfair already have AR-powered shopping experiences in market, allowing customers to visualise, to scale, that new dresser or dining table in their own homes. Expect similar providers or big-box retailers to release their own iterations of this functionality in 2018. The same can be said for beauty brands and retailers. The Paris-headquartered Sephora harnesses AR, both in selected retail stores and within a standalone app, allowing a user to try a specific shade of makeup to determine suitability. Both are examples of marker-based software pinpointing spatial or facial cues to overlay information and fill ‘the imagination gap’. Combine such examples with navigational functionality, personalised messaging and dynamic pricing, and it’s clear physical retail has some fight in it yet. These elements are set to become unique differentiators in large or flagship locations in 2018.

On the surface, the AR experience will not appear revolutionary in 2018. However, it will be what occurs beneath the hood that will bear the most consequence. This tinkering and experimentation will yield accelerated returns on execution, while machine learning and location-mapping advancements continue to chip away at the barriers to meaningful use-cases. If we throw in the continued development of Google Lens and its nascent visual search capabilities, it becomes apparent that a whole new communications paradigm is on the horizon – set to completely disrupt our current notions of consumer marketing. In the meantime, it’s important to continue building capabilities with the tools available in the moment, lest we be found flat-footed when the perfect brand or service use-case does arrive – that’s probably a reality worth avoiding.

¹ Wired Magazine: Dec 17/Jan 18

This article formed part of ‘PHD Perspectives’.