By Patrice Caine
With the growing number of cyberattacks, personal data breaches, cases of identity theft, and mounting concern that artificial intelligence might one day take control of our lives, confidence in new technologies is beginning to crack. What can be done to remedy the situation? Patrice Caine, Chairman and CEO of Thales, talks about what ultra-connected individuals, businesses and nations can do to reduce their risk of exposure.
In our ultra-connected societies, individuals, businesses and nations are more exposed to digital risks than ever before.
Billions of digital data records are exchanged every day, with data traffic expected to grow by a factor of 50 between 2010 and 2025. And when these exchanges are poorly protected or not encrypted, more data means more vulnerabilities. The number of attacks has also continued to grow, with an estimated 14 billion data records lost or stolen since 2013, according to the Thales Breach Level Index. The many data breaches exposed in recent years have also played a role in eroding long-term user confidence.
Added to this fear of hacking and digital identity theft is mounting concern over artificial intelligence, whose computing power the human brain clearly cannot match. For example, when an aircraft flies a one-hour reconnaissance mission covering an area of 3,000 km², it takes experienced military personnel an average of 300 hours to analyse the images.
With the AI-assisted image recognition systems being tested today, that volume of data can be analysed in real time! Broadly speaking, however, even though artificial intelligence can process enormous volumes of data more efficiently than the human brain, it is still very hard to provide a mathematical explanation of how its results were achieved. This “black box effect” can present real problems if those results are going to influence human decision-making.
Can digital technologies be trusted in these conditions? The simple answer is that confidence does not depend on the tools themselves but on how and where we use them and the limits we impose.
The first step that needs to be taken to restore user confidence is to design AI tools that are explainable — in other words tools that not only produce results but can show how they produced them.
As well as data-driven AI based on deep learning, there is a need for more model-based AI. This kind of AI also relies on algorithms, but the underlying models include legal, professional or ethical rules and principles that are established beforehand. That makes the results much easier to explain. Combining the power of data-driven artificial intelligence with the reliability of model-based artificial intelligence offers the best of both worlds!
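The combination described above can be sketched in a few lines. This is a minimal, purely illustrative example — all function names, rules and thresholds are hypothetical, not a Thales implementation: explicit rules established beforehand are checked first and yield a plain-language explanation, while an opaque learned score is only used as a fallback.

```python
# Hedged sketch: pairing a data-driven score with explicit, explainable
# rules (model-based AI). All names and thresholds are illustrative.

def data_driven_score(features):
    """Stand-in for a learned model: returns an opaque risk score in [0, 1]."""
    weights = {"login_attempts": 0.06, "new_device": 0.4, "foreign_ip": 0.3}
    score = sum(weights.get(k, 0.0) * v for k, v in features.items())
    return min(score, 1.0)

RULES = [
    # (name, predicate, explanation) — rules written down in advance,
    # so any decision they trigger can be explained in natural language.
    ("blocked_account", lambda f: f.get("account_blocked", 0) == 1,
     "the account is administratively blocked"),
    ("excessive_attempts", lambda f: f.get("login_attempts", 0) > 5,
     "more than five failed login attempts were recorded"),
]

def decide(features, threshold=0.5):
    """Return (decision, explanation): rules first, learned score as fallback."""
    for name, predicate, explanation in RULES:
        if predicate(features):
            return "deny", f"rule '{name}' fired: {explanation}"
    score = data_driven_score(features)
    decision = "deny" if score >= threshold else "allow"
    return decision, f"learned risk score {score:.2f} vs threshold {threshold}"

decision, why = decide({"login_attempts": 7, "new_device": 1})
print(decision, "-", why)
```

When a rule fires, the explanation is immediate and human-readable; only decisions that fall through to the learned score inherit the black-box problem.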
Also for the sake of transparency and explainability, another promising area of exploration is AI’s capacity to interact with humans to explain its decisions in real time and in natural language.
So making AI explainable is one important way to restore confidence. The other crucial step is to provide security technologies that protect people’s digital identities effectively.
Our take on this is that protection will never be effective if it isn’t easy to use. Improving digital security by piling on additional layers of protection is likely to be counterproductive, because users have a tendency to sidestep measures they find too restrictive. We know they often use the same password everywhere, for example, even though it exposes them to a greater risk of being hacked. We need to offer new ways of improving user security without degrading the user experience.
These new solutions exist. Biometric technologies like facial recognition and fingerprint authentication are already used to secure ID cards, passports and driving licences, and they have huge potential in consumer applications too. Some telephone banking applications already use digital fingerprints or facial ID to authenticate transactions. These biometric technologies have two key advantages in that they provide extremely effective security and are also very easy to use — in fact users become their own passwords. The objective is to secure the entire digital experience, from end to end: fingerprint ID enables access to the service, encryption guarantees data integrity, and accounts can be deleted at the end of the transaction if users don’t want their data to be used for other purposes.
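The end-to-end sequence above — biometric check gates access, encryption protects the data, the record is deleted when the transaction ends — can be sketched as follows. This is purely illustrative standard-library code with hypothetical names; real systems rely on hardware-backed biometric matching and vetted cryptographic libraries, not the toy primitives shown here.

```python
# Hedged sketch of an end-to-end secured transaction (illustration only).
import hashlib
import hmac
import secrets

def biometric_match(enrolled: bytes, presented: bytes) -> bool:
    """Stand-in matcher: real biometric matching is fuzzy, not exact."""
    return hmac.compare_digest(enrolled, presented)

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random stream from the key (toy construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(key: bytes, data: bytes):
    """Encrypt-then-MAC: ciphertext plus an integrity tag."""
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct, tag

session = {}
enrolled = hashlib.sha256(b"alice-fingerprint").digest()

# 1. Fingerprint ID enables access to the service.
if biometric_match(enrolled, hashlib.sha256(b"alice-fingerprint").digest()):
    session["key"] = secrets.token_bytes(32)
    # 2. Encryption guarantees confidentiality and integrity in transit.
    ct, tag = protect(session["key"], b"transfer 100 to account 42")
    expected = hmac.new(session["key"], ct, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)

# 3. Account data is deleted at the end of the transaction.
session.clear()
```

The point of the sketch is the shape of the flow, not the primitives: the biometric check replaces the password, the session key never outlives the transaction, and deletion is part of the protocol rather than an afterthought.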
The third way to restore confidence is to establish ethical uses of technology so that humans always remain in control of all the final decisions.
This brings us back to artificial intelligence. The goal isn’t to replace humans with AI — which has neither the flexibility to adapt to the unexpected nor the innate ability to multi-task — but rather to get humans and machines to cooperate. The ultimate aim is to harness the tremendous computing power of AI to guide choices that can only be made by humans. The purpose of AI is to “augment” humans by helping them make the best decisions.
In artificial intelligence and digital security, confidence is the key to user acceptance. Our job is to establish a framework within which user confidence will flourish.
Thales has been present in the UAE since 1978, and the country has evolved into a key player in the Group’s target markets on a global scale. The Group’s vision in the UAE is to bring security, safety and growth, while supporting state sovereignty, enabling sustainable economic development and fostering local talent. Thales serves key sectors including aerospace, space, ground transportation, digital identity and security, and defence and security.