

Face recognition and the law

A cyberlaw academic has called for an independent review of how the UK uses CCTV analytics. Andrew Charlesworth, Professor of Law, Innovation and Society at the University of Bristol, points out that the law is lagging well behind developments in surveillance such as automatic face recognition (AFR), which is already deployed in some areas of the UK and is currently the subject of two court cases. Prof Charlesworth believes an independent review is the only way forward if such technology is to be used legitimately to tackle crime and terrorism without alienating the public.

In a white paper commissioned by Cloudview, Charlesworth highlights the recently published Home Office Biometrics Strategy (featured in the August 2018 print issue of Professional Security magazine) as an example of the Government’s reluctance to grasp the regulatory nettle and provide a detailed strategy. The document has been criticised by regulators and campaigning groups alike and has, he says, created a policy void in which campaigning groups are driving the debate in a primarily negative direction.

Charlesworth believes technology is both the problem and the solution. Citing research which highlights that market forces, social norms and technology architecture can all be used alongside the law as part of an effective regulatory strategy (see Lessig, L. (2000) Code and other Laws of Cyberspace, Basic Books), he says that ‘architecture’ offers a crucial and effective way to regulate CCTV analytics. In other words, technology companies could hard-wire the desired regulatory outcome into the CCTV systems themselves.

Charlesworth says: “The issue with the use of facial recognition technology is the systems which underlie it, such as police databases. We need to design the actual technology so that it controls the flow of data and how it is stored and deleted. This should be reliable, transparent and in full compliance with data protection legislation, because images are just another form of personal data. In my opinion, this is crucial if analytic technologies are to be accepted by the public as legitimate security tools which will help to keep them safe without breaching their human rights.”

James Wickes, CEO and co-founder of cloud-based visual surveillance company Cloudview says: “Right now we are seeing case after case of biometric technology being used as the proverbial sledgehammer to crack a nut. The Government is too timid to say that biometrics is a positive thing if used in the right way, and so the debate is being brought to the public’s attention by campaign groups who for obvious reasons aren’t looking to present the full picture.

“The public are rightly reluctant to hand over their digital data, but the solution is not to ban the technology but to ensure that it’s used properly. This means limiting use to where it’s genuinely needed, and then having effective processes such as privacy impact assessments which are designed into the technology and properly tested, so that our democratic freedoms and human rights aren’t abused.”

Charlesworth’s paper makes three recommendations:

· Ensuring that existing general legal regulation is effectively overseen and enforced, with personal image data treated in the same way as other personal data
· Taking a holistic view of CCTV and CCTV analytics use to ensure that proposed regulation is flexible enough to encourage innovation whilst being capable of practical implementation and, where necessary, enforcement
· Considering positive uses for the technology to find a balance between benefits and risks, whilst encouraging users to self-regulate through appropriate technology architectures.

The paper can be downloaded here.
