Interviews

AI safeguards

by Mark Rowe

Does AI make you WannaCry? asks Dik Vos, CEO of management consultancy and testing services company SQS.

Almost every day we see or hear about major cyber-security threats and software issues that have serious repercussions on the daily running of society. The WannaCry ransomware attack in May brought the NHS to its knees, crippling its digital capability. Shortly afterwards, a system collapse, suggested to have been caused by human error, forced British Airways to take the drastic action of cancelling all its flights for almost 24 hours. But, of course, we have been watching these types of catastrophic digital downfalls play out in movies and TV shows for years – computers malfunction, software goes rogue and hackers infiltrate the deepest realms of government, business and our private lives.

Life imitates art

Problematically for the adoption of emerging technology, as the public becomes more aware of the real scale and danger of cyber-attacks, it is becoming increasingly sceptical about embracing it. The more life imitates art, the less the public believes in the safety, security and quality of technological advances such as artificial intelligence (AI), autonomous vehicles and smart home devices. Our recent study[1] shows that almost 80 per cent of UK adults looking to buy AI products in the future may reconsider due to the threat of hackers targeting this technology. And nearly half (48 per cent) claim they would not purchase AI devices at all due to the threat of cyber-attacks. Indeed, the revelations regarding Orwellian hacking tools developed and used by the CIA and British intelligence to spy on household connected devices highlight how easily smart home systems can be compromised.

The reality is that consumers are not yet comfortable embracing technology they have watched destroy or take over the world in films numerous times. But it is becoming increasingly obvious that fearmongering is standing in the way of positive digital transformation, and this is an issue the industry must tackle head on if the public is to embrace emerging technology.

Help or hindrance?

Home robots, designed to help people and eliminate household chores, are regularly depicted negatively on-screen. For instance, the AI robot Ultron, featured in Marvel's Avengers: Age of Ultron, was created with the intention of helping humanity but inevitably acts in destructive ways its makers never intended. In that world, Thor, Iron Man and Captain America were on hand to neutralise the cold-hearted robotic threat; outside the Marvel cinematic universe, the real world has no super-powered deterrent. Indeed, less than a quarter (24 per cent) of those surveyed in our study believe home robots are safe, with over half (52 per cent) concerned they could fall victim to cyber-criminals intent on using this technology against them. It is, therefore, more important than ever for companies producing home robots to prove to the public that these devices are not a menace to society, but designed to help make life easier.

On the road to ruin

Of course, autonomous vehicles have similarly suffered a lack of consumer trust, as films have shown them falling foul of cyber-attacks and malfunctioning. Only 28 per cent of the public believe autonomous vehicles will be safer than human-operated cars, and over 65 per cent are concerned self-driving cars would crash. The threat of cyber-criminals is also a cause of anxiety, with nearly three out of five consumers (59 per cent) convinced self-driving cars could be infiltrated by hackers, resulting in horrific accidents or potential hostage scenarios. Consumers have also seen vehicles hijacked in the real world[2]: in 2015, researchers demonstrated they could infiltrate a connected Jeep's digital systems and take control of the vehicle. Foreshadowing this, in 2004's I, Robot, Will Smith's character was so concerned by the threat of his autonomous vehicle being taken over by the rogue AI system VIKI that he would only drive in manual mode. Fortunately, with companies such as Tesla, Apple and Microsoft pouring a huge amount of time, energy and talent into developing self-driving technology, there is clearly the potential to win the goodwill of the public.

Gaining consumer trust

Though understandable, the public's reluctance to buy into the latest tech products could see the UK left behind by the rest of the world in the race to leverage technological innovation. The scepticism and concern surrounding recent consumer innovation could severely hamper the UK's economic growth and further widen the technology skills gap the nation is facing. Though this presents a challenge, it is certainly an area businesses have the power to address. Clearly, consumer trust in technology can be swayed by the threat of ever more daring cyber-criminals and by safety implications, but consumers can also be influenced by what they see at the cinema and on TV.

It is up to businesses to prove to the public that the doomsday predictions made in films are works of fiction and nothing more. Businesses have an opportunity to show consumers that these products will have a positive societal impact. In turn, it is incumbent on them to prioritise quality to protect customers, thus building consumer confidence in emerging technologies. Quality must be integral, from concept to finished product.

Ultimately, if innovations such as AI, autonomous vehicles and smart homes are to become part of everyday life, government and businesses have a duty to prove to the public that every precaution has been taken to safeguard and protect human life. Quality is non-negotiable and by proving this is at the core of innovation, businesses will begin to change the current public perception of advanced technology.
