Google IO 2017 just wrapped up, with attendees both live and virtual buzzing over the many possibilities laid out by the internet giant. One blog post can’t contain the vast depth of knowledge and innovation across the many facets of Google, but retailers trying to read the tea leaves for what the future of the internet may hold for ecommerce should focus on a few key trends:
- The pending transition of focus from mobile-first to artificial intelligence (AI)-first
- The complexities and possibilities of multimodal interactions
- The rise of the next billion mobile users
These three trends aren’t strictly new in 2017, given the introduction of Google Home, Progressive Web Apps, Accelerated Mobile Pages, and TensorFlow as part of the 2016 edition of Google IO. What was different from previous IO conferences was Google’s shift from announcing large, early-stage product ideas to showcasing continued investment in existing technologies.
Overall, the technologies and solutions in focus at Google IO were deeply user-centric, a welcome evolution from past launches like Google Wave, Google+, and Android Instant Apps. This gives reason to believe that the investment from Google and the wider community in projects like Accelerated Mobile Pages (AMP), Progressive Web Apps (PWA), TensorFlow, Google Assistant, and Chrome will have a long-lasting impact in shaping next-generation interactions.
Let’s dive into each trend and see what to expect now and in the future.
The pending transition of focus from mobile-first to AI-first
Google CEO Sundar Pichai flashed up the core transition slide in his opening keynote: “Mobile-first to AI-first”. The underlying push across all initiatives was the deep prevalence of AI and machine learning. Unlike other industry initiatives such as IBM Watson, Salesforce Einstein, or Adobe Sensei, which have labelled and productized these capabilities, Google has chosen to weave them throughout the entire web ecosystem, dog-fooding its AI platform along the way.
The clearest sign of the major investment Google has made in AI is the introduction of Tensor Processing Units (TPUs) in its AI data centers. These TPUs are dedicated cloud environments built to handle the demanding, unique computations required to train AI models.
Google has applied its investment in AI to make everyday technologies even more capable, which is a great thing for all users. A prime example is the expansion of the seemingly ho-hum Google Photos. Yes, a photo-organizing app took center stage, illustrating how focusing on the user, not the technology, makes for a subtle but welcome shift. Google is leveraging AI within Photos to auto-select the best shots out of the hundreds we now take with mobile phones, then uses facial recognition to automatically share photos with your friends and family. No longer is there a need to take the same photo on five different devices, send sprawling folders of photos, or, worse, forget to share all the wonderful baby photos with the family. The tedious side of photos is handled by AI, letting the user snap away and forget about following up.
Another great example is Google Home, which has been opened up via Google Assistant being made available on iOS and expanded to accept text input. Actions on Google now allow third-party integrations, including the ability to place transactions for purchases like food delivery or perishable reorders. The key to making both voice and text assistants viable enough to shift consumer behavior is the remarkable reduction of Google’s speech recognition word error rate to 4.9%, approaching human-level accuracy. Passing this milestone means the greatest frustration in voice and conversational interactions is surmountable in the immediate future.
Finally, the potential of augmented reality (AR) is being realized with the introduction of Google Lens, a new capability that lets any Android app leverage AI for image recognition. From a retailer perspective, this could drive a shift away from hard-to-manage in-store technologies like Wi-Fi and iBeacons toward real-time image recognition via visual positioning services for in-store location detection and pathfinding.
The complexities and possibilities of multimodal interactions
This is the year Google focused on adding voice and vision as new modalities to design for. Retailers have long talked about the complexity of the shopper journey, given the introduction of mobile moments, where considerations of touch capabilities, movement, privacy, and context complicated all aspects of ecommerce. With voice and chat interactions based on Google Assistant (the technology underpinning Google Home, the competitor to Amazon Echo) and Google Lens providing AI-driven augmented reality on every device, we’re at a point where multimodal-aware apps can unlock a much better user experience.
For example, ecommerce’s failure to penetrate the living room via TV was deeply limited by technology that didn’t respect the static, output-centric, input-poor nature of that room. Porting a mobile app to an Apple TV didn’t translate to tangible engagement or ROI. However, imagine a future where you can ask Google Assistant (via Google Home or your phone) to find a tie to match your current outfit. Assistant could tap into a nearby Nest cam to analyze your attire, or ask a set of questions via voice if camera input is unavailable. Google Assistant could then stream suggested products to the TV screen to take advantage of the plethora of pixels, let you rotate a large 3D image via Tango on your remote, and then send your phone a map to the nearest store with inventory, all in time for your previously scheduled appointment.
Sounds a bit too futuristic? The underpinnings arrived this year with the advances shown off at Google IO. Retailers who can find the discipline to test and iterate on the many combinations of these technologies will put the best user experience forward and differentiate themselves.
How effective these multimodal interactions become will depend on the adoption of new capabilities covering:
- Streamlined sign-up and sign-in flows, where the web will treat identity as a first-class citizen. Upcoming improvements in Chrome 60/61 will unlock an incredibly easy flow via Smart Lock (the Credential Management API) to avoid logins, while long-term plans for WebAuthn will unlock the ability to use biometrics like fingerprint scanners beyond native apps.
- Streamlined payment flows via the Payment Request API and Google Payments, where websites can leverage any native payment app (such as the newly announced Alipay integration) to pay on the web in one click. This holds promise, since Google has found that 80% of checkouts contain only one product.
- Google Assistant transactions, where an intelligent assistant (note the difference from a chatbot, which represents a virtual agent for a retailer) will facilitate orders over natural language with any retailer that participates in Actions on Google. The assistant framing may be the key to shifting the paradigm away from the current reputation of chatbots.
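To make the one-click payment flow above concrete, here is a minimal sketch of the Payment Request API, the standard web API behind it. The product name, price, and the `'basic-card'` payment method are illustrative placeholders, and the hand-off to a payment processor is left as a comment:

```javascript
// Hedged sketch: a one-tap checkout via the browser-native Payment Request API,
// assuming a single-item cart (per Google's stat that 80% of checkouts
// contain only one product).
async function payWithRequestAPI() {
  const methodData = [
    { supportedMethods: 'basic-card' }, // illustrative; Alipay etc. use vendor-specific identifiers
  ];
  const details = {
    total: {
      label: 'Silk tie',                               // hypothetical product
      amount: { currency: 'USD', value: '39.99' },     // hypothetical price
    },
  };
  const request = new PaymentRequest(methodData, details);
  const response = await request.show();  // opens the browser's native payment sheet
  // ...send response.details to your payment processor here...
  await response.complete('success');     // dismiss the sheet
  return response;
}
```

In a real storefront you would call `payWithRequestAPI()` from a buy-button click handler, and fall back to a conventional checkout form when `window.PaymentRequest` is undefined.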
The rise of the next billion mobile users
Technology companies from India, Southeast Asia and Africa were heavily featured, pushing the limits of mobile based on the constraints of slow devices and slow networks. As Google expects the rise of the next billion mobile users from locations such as Nigeria and India, a few strategies were pushed:
- A core focus on data reduction and offline mode via Progressive Web Apps, and efficient apps via Android Go, given that downloading an app could cost as much as 5% of a user’s wage in these regions of the world
- A strong focus on using the right metrics to measure performance given slower networks and devices, which meant keying on metrics like:
- First Contentful Paint (is it happening?)
- First Meaningful Paint (is it useful?)
- Time to Interactive (is it usable?)
- Long Tasks (is it delightful?)
- The rising popularity of Progressive Web Apps, with key releases of Polymer 2 (10x faster, 70% smaller), Create React App, Vue PWA templates, and Preact CLI supporting PWAs
- Accelerated Mobile Pages getting even faster, with a 2x speed bump when viewed from Google Search Engine Results Pages. Combined with PWAs, the message was clear: “An investment in AMP today is an investment in our PWA tomorrow”
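The paint and long-task metrics listed above can be captured in the browser today with the standard PerformanceObserver API; the sketch below shows one way to wire them up, where the `report` callback is a hypothetical hook into your analytics pipeline:

```javascript
// Hedged sketch: observe First Paint / First Contentful Paint and long tasks
// using the Performance Timeline API, then forward them to a reporting hook.
function observeLoadingMetrics(report) {
  const paintObserver = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is 'first-paint' or 'first-contentful-paint'
      report(entry.name, entry.startTime);
    }
  });
  paintObserver.observe({ entryTypes: ['paint'] });

  const longTaskObserver = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Tasks over 50 ms that block the main thread and hurt responsiveness
      report('long-task', entry.duration);
    }
  });
  longTaskObserver.observe({ entryTypes: ['longtask'] });
}
```

Metrics like First Meaningful Paint and Time to Interactive are not exposed as simple entry types, so in practice they come from lab tooling such as Lighthouse rather than an in-page observer.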
Overall, mobile-centric companies were asked to create Progressive Web App experiences that follow the best practices of being reactive, predictable, and in control, matching my favorite quote of the conference: “Human perception of time is fluid, and can be manipulated in purposeful and productive ways.”
From that point of view, the advances in PWAs, AMP, and UX for the next billion users benefit the entire mobile web, with retailers primed to be major beneficiaries by adopting the latest advances in mobile performance.