Facial recognition technology has numerous harmless applications. You may use it every day to unlock your phone or gain access to secure areas.
But facial recognition can also be used for surveillance, an application that has generated plenty of excitement in the security industry and among those who supply products to it.
However, we should balance that excitement with caution. As with all surveillance, facial recognition must be used responsibly and with restraint to protect the privacy and rights of those it is supposed to protect.
The larger our digital footprints, the greater the risk that our data ends up in the wrong hands. At a fundamental level, we all deserve access to our personal data, a say in how it is used, and the right to request its removal.
The erosion of privacy has become one of the most pressing dangers of the digital age, with the once-niche threat of doxing (publicly revealing someone’s private information – or threatening to) now a serious risk for anyone with a digital presence.
It is easy to see how facial recognition software can further weaken our already tenuous grip on our privacy. When our faces are captured and stored, what is done with this data? Who has access to it? And what will they use it for?
Earlier this year, South Wales Police faced a legal challenge against their use of facial recognition software in public crowds. The surveillance system in question had captured the biometric data of 500,000 faces, a usage the High Court ruled lawful, with implications for similar systems across the UK.
Privacy advocates raised concerns about innocent people having their biometric data captured and stored without consent, about how this data is used, and about whether it is stored securely.
Even if you have complete faith that police forces will use facial recognition software responsibly and without overreach, a leak of that data into the wrong hands would mean a severe breach of privacy for millions.
When facial recognition software proliferates for private use, the number of potential leaks or misuses of the technology will only multiply. There must be extensive safeguards and effective oversight bodies in place for such technology to be used safely – before it is too ubiquitous to control.
Racial bias is a problem that needs to be tackled across the security industry. While racial biases in security guards or hiring practices can be mitigated with appropriate training and better management, they are much harder to root out of facial recognition algorithms, which end-users tend to assume are neutral.
There have been high-profile controversies over racial bias hardcoded into algorithmic services provided by Google and Twitter, but such incidents are far more dangerous when the algorithms power facial recognition software that determines who is and is not a threat.
Despite widespread outcry in the technology community, police and security forces in the UK and across the world continue to deploy facial recognition technology that is at high risk of having built-in racial bias.
We urge end-users of facial recognition software to remember that algorithms are only as neutral as the people who write them. Before integrating such software into your security systems, scrutinise the developer’s commitment to rooting out racial bias in their products.
At Magenta Security, we always balance the excitement and possibilities provided by new technology against how it might clash with our core mission – to make people and places safer.
Magenta Security provides award-winning security services throughout the UK. We are in the top 5% of ACS-approved contractors and were the first security company in Europe to be awarded ISO 14001 certification for our environmental management systems.