By Sarah Townsend
The $500bn technology giant is extending its reach into hardware and artificial intelligence, ultimately aiming to create a sophisticated robot that can communicate with smart-device users to get things done. In London, Google’s senior executives talk Arabian Business through the company’s bold vision
Google plans to solve the problems of this world, one algorithm at a time.
The technology giant — which has doubled its market capitalisation to around $500bn since restructuring to become Alphabet in 2015 — is far more than the internet search engine by which it made its name. It is a massive digital services behemoth, stretching its code-embedded tentacles into new areas such as hardware, virtual reality and artificial intelligence.
This autumn, Google indicated its ambitions for the future as it launched a series of new products, from the Pixel smartphone and Chromecast media player built to rival the iPhone and Apple TV, to the Daydream View virtual reality headset and Google Home connected speaker, which enables users to control smart home appliances.
The company has made no secret of its plans to tap into the hardware market — in April, it appointed former Motorola president Rick Osterloh to head up a dedicated hardware division to unify its disparate hardware projects.
However, Google’s growth plans go way beyond this. Also announced this autumn were software innovations and upgrades to help propel Google, its partners and users into the next phase of computer intelligence. In September, for example, the company launched the Google Neural Machine Translation (GNMT) system to help it improve how it translates languages. For the past decade, Google Translate has used statistical models to translate text word-by-word. The new system deploys sophisticated ‘Machine Learning’ technology to translate whole sentences at a time, using the context to help it deduce the most relevant translation, then tweaking it to sound more natural.
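As a rough illustration of why sentence-level context matters, consider the ambiguous English word “bank”, which becomes “banque” or “rive” in French depending on its neighbours. The toy Python below contrasts the two approaches; it is purely illustrative, since GNMT itself is a trained neural network, not a hand-written rule table.

```python
# Word-by-word lookup translates each word in isolation, so it must
# commit to a single translation for an ambiguous word like "bank".
WORD_TABLE = {"river": "rivière", "bank": "banque", "money": "argent"}

def translate_word_by_word(sentence):
    return [WORD_TABLE.get(w, w) for w in sentence.split()]

def translate_with_context(sentence):
    # A context-aware system can look at the whole sentence before
    # choosing: "bank" next to "river" means the riverbank ("rive").
    words = sentence.split()
    out = []
    for w in words:
        if w == "bank" and "river" in words:
            out.append("rive")
        else:
            out.append(WORD_TABLE.get(w, w))
    return out

print(translate_word_by_word("river bank"))   # ['rivière', 'banque'] (wrong)
print(translate_with_context("river bank"))   # ['rivière', 'rive']
```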
On November 15, Google announced the roll-out of GNMT to eight language pairs — to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish — covering more than 35 percent of all Google Translate queries, the company said.
It also unveiled Google Assistant, a smarter version of Google Now (Google’s answer to Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana), which enables users to interact with Google to answer questions or complete tasks. The new Assistant deploys the same Machine Learning software as Google Translate to build up an artificial memory of information and communicate more naturally with users.
In London last month, Google’s chief engineer Behshad Behzadi told media including Arabian Business about the company’s plans to create “the ultimate conversational search assistant”. On a basic level, you can ask the Assistant questions such as, “how tall is the Eiffel Tower?” and a few questions later, “who built it?” and the software will know you are still talking about the Eiffel Tower.
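The Eiffel Tower exchange boils down to remembering what was last talked about and resolving words like “it” against that memory. A minimal sketch in Python (hypothetical data and logic, not Google’s implementation) might look like this:

```python
# Toy model of conversational context carry-over: remember the last
# entity the user named, and resolve follow-up questions against it.
FACTS = {
    "eiffel tower": {
        "how tall": "330 metres",
        "who built": "Gustave Eiffel's company",
    },
}

class ToyAssistant:
    def __init__(self):
        self.topic = None  # the last entity the user mentioned

    def ask(self, question):
        q = question.lower().rstrip("?")
        # If the question names a known entity, remember it as the topic.
        for entity in FACTS:
            if entity in q:
                self.topic = entity
        if self.topic is None:
            return "I don't know what you're referring to."
        # A follow-up like "who built it?" is matched against the
        # remembered topic rather than requiring the full name again.
        for intent, answer in FACTS[self.topic].items():
            if intent in q:
                return answer
        return "I don't know."

bot = ToyAssistant()
print(bot.ask("How tall is the Eiffel Tower?"))  # 330 metres
print(bot.ask("Who built it?"))                  # Gustave Eiffel's company
```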
However, as time goes on, machine learning will store up and organise data about users and the world in general to apply contextual intelligence to its responses. Behzadi used the example of Spike Jonze’s 2013 science fiction film Her, in which the character played by Joaquin Phoenix develops a relationship with a computer operating system personified by Scarlett Johansson’s voice, which knows everything about him and can help him address both advanced practical tasks and complex emotional issues.
This level of technological sophistication may still be years away, but even now the system can log details about your shoe size based on a recent purchase and buy you a new pair several months later without you keying in any new information. You can even tell it you are feeling unhappy and it will relate a joke to cheer you up, according to Behzadi.
As of this month, software developers are able to build Google Assistant into their apps via a new platform called ‘Actions on Google’, enabling the Assistant and third-party apps to talk to one another. Users will be able to ask the Assistant to, for example, book them a taxi via Uber or a restaurant table via OpenTable. Google Assistant launches on Google’s own hardware products and will eventually be available on more Android devices, such as smart watches.
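Conceptually, a platform like this amounts to routing a spoken request to whichever registered app can fulfil it. The sketch below uses hypothetical handler names and trigger phrases, not the real Actions on Google API:

```python
# Hypothetical third-party "actions" an assistant could dispatch to.
def uber_action(request):
    return "Booking a taxi via Uber..."

def opentable_action(request):
    return "Reserving a table via OpenTable..."

# Each registered action declares trigger phrases it can fulfil.
ACTIONS = [
    (("taxi", "ride"), uber_action),
    (("restaurant", "table"), opentable_action),
]

def dispatch(request):
    words = request.lower()
    for triggers, handler in ACTIONS:
        if any(t in words for t in triggers):
            return handler(request)
    return "Sorry, no app can handle that yet."

print(dispatch("Book me a taxi to the airport"))
print(dispatch("Find me a restaurant for tonight"))
```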
Another innovation unveiled in recent weeks was the latest version of Google’s Android wearables platform, Android Wear 2.0. The system has been available in developer preview since the summer and will be launched to the public next year. It upgrades the current Android Wear platform, which embeds the Android operating system and Google apps into smart watches.
Under the system, users will be able to customise watch faces to have third party apps showing on the screen, for example, related to food, fitness and foreign exchange rates. Google is working with watch manufacturers such as Fossil, Tag Heuer and Michael Kors to produce a range of product designs, sizes and colours that incorporate the Android platform. Eventually, the Assistant will be built into wearable devices under Google’s plans.
Google’s vice president of engineering, David Singleton, told reporters in London that in an increasingly ‘multiscreen’ world, wearables have three main advantages when compared to traditional smart devices.
“First, we can start to replace all the junk people carry around with them to get stuff done in the real world, like credit cards, rail tickets and keys. We believe these ‘tokens’ can verify your identity in a more secure and safe way than the metal or plastic in your pocket,” Singleton said.
“Second, they make the user feel like they have superpowers, with access to all those services they care about right here, all the time and in a really frictionless way — something that will be tied together when we launch Google Assistant on wearables.
“Finally, [wrist wearables] go on a place on the body where you can get a lot of sensors, so we think they are a great way to look out for your well-being and health.”
Google is working with research hospitals and other organisations to embed sensors into devices that could help people detect health problems earlier than is currently possible.
Singleton said Google has recorded a faster adoption rate for wearables than for smartphones. “For the first three-four years of having smartphones in the world, total volume shipments were in the hundreds of thousands. We’ve certainly seen a much faster adoption curve for Android Wear watches than in the early days of smartphones.”
However, the International Data Corporation (IDC) reported last month that the worldwide smartwatch market experienced “growing pains” in the third quarter of 2016, resulting in a 51.6 percent year-on-year decline in shipment volumes from 5.6 million to 2.7 million units.
The IDC’s latest market forecast report in September said new smartwatch shipments are expected to see only modest growth for the rest of 2016 “due to late-in-the-year and iterative product releases”. Shipments are expected to reach 20.1 million units in 2016, a 3.9 percent increase from 19.4 million in 2015, according to IDC.
However, Singleton noted that the third-quarter shipment decline partly reflects a spike in 2015, when the Apple Watch was first launched to the public. Pebble’s Pebble 2 watch and Apple’s second-generation smart watch were only available for the last two weeks of the third quarter of 2016, meaning the figures have yet to catch up.
Singleton was reluctant to state 2017 will be the breakthrough year for smartwatches — “I feel we are still very early in this market” — and noted that Google still has to work within the confines of limiting factors such as battery technology, “which is not getting much better very quickly”.
“It’s fundamentally chemistry,” he said. “The battery is a little bag full of chemicals that deliver energy throughout the day. At present, the energy density in electronic devices is already very high.”
He also insisted that Google has no plans to release wearables other than watches for the foreseeable future, despite reports last month that rival Apple was considering entering the smartglasses market. Google was forced to withdraw its own Google Glass product from sale in 2015 following health and safety and other concerns.
Still, with the public increasingly calling for “lots of different shapes and sizes of internet-connected devices”, wearables will play an important part in Google’s growth story, Singleton said. “Over time, we imagine these wearable platforms will be monetised in a similar way to smartphones.”
At present, Google shares mobile advertising revenues with app developers and other partners. Parent company Alphabet posted third-quarter revenue of $22.45bn in October, up from $18.68bn a year earlier, with chief financial officer Ruth Porat telling investors, “mobile search and video are powering our core advertising business”.
Research firm eMarketer estimates that 59.5 percent of Google’s net global ad revenues will come from mobile internet ads by the end of the year, up from 45.8 percent in 2015. “The new devices are not only aimed at diversifying Google revenues but also enriching Google’s advertising targeting capabilities as consumers engage and share information with Pixel, Google Assistant, Daydream View, Chromecast and other Google ecosystem devices,” eMarketer’s senior forecasting analyst Martín Utreras said in a statement in October.
“We’re just scratching the surface of what is possible with wearables,” Singleton said. “And we’re going to see the Assistant have a lot more power as it links up with these, and other, devices.”
The concept behind many of Google’s latest innovations is artificial intelligence (AI) — the science of making computers adapt, grow and become more intelligent as they gather data. Machine Learning is one approach developed to achieve this. Unlike traditional computing, which uses a series of explicit logic statements to perform tasks, Machine Learning is based on artificial neural networks (ANNs), which emulate the way synapses work in the brain.
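The contrast with explicit logic statements can be seen in a minimal example: a single artificial neuron that learns the logical AND function from examples, strengthening or weakening its synapse-like weights as it goes. This is a textbook perceptron sketch, not any Google system:

```python
# A single artificial neuron: weighted inputs act like synapses, and
# training adjusts the weights whenever the neuron's answer is wrong.
def train_neuron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out
            # Strengthen or weaken each "synapse" in proportion to
            # its input and the size of the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

def predict(w, bias, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

# Learn logical AND from data rather than from explicit logic statements.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```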
Google CEO Sundar Pichai told a press event in San Francisco in October: “When I look ahead to where computing is headed, it’s clear to me that we’re evolving from a mobile-first to an AI-first world. AI is the future and at the heart of these efforts is our goal to build a Google Assistant.”
A month later, he told media including Arabian Business, app developers and digital policymakers in London: “Unlike traditional areas of computer science, Machine Learning offers a far more efficient way for engineers to code and build new experiences. Computers can iterate and improve at a phenomenal rate and as a result get better at helping people with questions and tasks.
“It might mean something relatively simple, like searching your phone for photos of sunsets [by the user drawing a rough doodle of a sunset on their screen]. Or, it might mean something extremely difficult to solve, like finding a cure for certain illnesses or fighting climate change.
“What’s most exciting is that we’ll be able to bring technology to bear on more problems, and serve more people, than ever before.”
Google, which has been working on standalone projects such as self-driving cars for the past few years, ramped up plans to explore the mysterious world of AI in 2014 with the acquisition of UK-based DeepMind Technologies for a reported $400m.
The company, headed up by neuroscientist Demis Hassabis, develops learning algorithms for applications such as simulations, e-commerce and games. It claimed to have made a breakthrough in its work with Google in March when AlphaGo, the computer programme it built to play the ancient Asian board game Go, beat top professional player Lee Sedol — the South Korean holder of 18 international titles — 4-1 in a dramatic match watched by millions around the world.
During the London event, Hassabis said DeepMind’s long-term goal was to build an artificial hippocampus, modelled on the part of the human brain that plays a central role in memory and learning.
If this is achieved, tech giants like Google could create the sort of operating system envisaged in the movie Her: a fully-fledged robot capable of helping humans solve the complex problems behind medical breakthroughs and other revolutionary achievements.
Singleton told Arabian Business he thinks the movie is more about “fiction and creative licence than the way technology is actually heading”.
“With Google Assistant, we do want it to feel like you’re talking to a person — you should be able to have a conversation with it and it should be able to get to know you — but it’s not like we’re setting out to build a thing that you become attracted to, or that replaces normal human interactions. It’s more that, by acting human, it can provide tremendous value.”
However, with this comes huge responsibility and a host of ethical and other policy issues such as data privacy and cybersecurity, which Google says its teams are working to resolve on a day-to-day basis. Matt Brittin, Google’s president of Europe, Middle East & Africa (EMEA) business and operations, says: “The internet has been a disruptive force. On the one hand, it’s incredibly exciting to be able to access and generate information from diverse sources.
“On the other, it’s quite disruptive of traditional business models and power structures and I think we’re seeing that shift in different countries at different speeds. We don’t claim to have all the answers and are navigating it like everyone else.
“With that in mind, we build and adhere to a set of principles we regard as important, where we try to give control to users, and transparency [about how Google intends to use that data].”
Certainly, the operational challenges facing Google are likely to deepen as the company’s work becomes ever more knotty and sophisticated.