As a society, we have come to accept surveillance cameras as part of everyday life. In fact, CCTV cameras have become so commonplace that it is estimated there are over 600,000 surveillance cameras in London alone – that’s one for every 14 people.
Clearly, we have become so used to the constant presence of CCTV that most of us wouldn’t think twice about cameras in public spaces; they’ve become part of the furniture, very often going unnoticed.
Of course, the sheer prevalence of surveillance cameras also brings benefits for society – chiefly acting as a deterrent to crime, helping to maintain public order and reassuring the public of their safety.
But not everyone is happy with the constant video surveillance we are under or, rather, the extent of the information these surveillance cameras are now capable of gathering about us.
Developments in artificial intelligence (AI) have changed the face of surveillance technology in recent years. Gone are the days when CCTV cameras played a purely passive role, simply capturing a scene for a human operator to review when necessary.
Powered by AI, computers are increasingly capable of analysing surveillance footage in real time, meaning cameras can pick out individuals and establish their identity among the mass of people walking down a street.
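To make the mechanics concrete, here is a minimal sketch of how this kind of matching can work, using the open-source Python face_recognition library. The file names and one-person “watchlist” are hypothetical, and a real deployment would process live video frame by frame rather than still images:

```python
# A minimal sketch of watchlist matching against a single CCTV frame,
# using the open-source face_recognition library. File names and the
# watchlist are hypothetical illustrations.
import face_recognition

# Enrol a known face: each photo yields a 128-dimensional encoding.
# (Assumes the enrolment photo contains exactly one clear face.)
known_image = face_recognition.load_image_file("person_of_interest.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]
watchlist = {"person_of_interest": known_encoding}

# Treat one CCTV frame as a still image: locate and encode every face in it.
frame = face_recognition.load_image_file("cctv_frame.jpg")
locations = face_recognition.face_locations(frame)
encodings = face_recognition.face_encodings(frame, locations)

# Compare each detected face against the watchlist. The tolerance
# threshold trades false matches against missed matches.
for (top, right, bottom, left), encoding in zip(locations, encodings):
    for name, known in watchlist.items():
        match = face_recognition.compare_faces([known], encoding, tolerance=0.6)[0]
        if match:
            print(f"Possible match for '{name}' at box {(top, right, bottom, left)}")
```

The key design point is that threshold: systems deployed at scale must choose how tolerant to be, and every choice produces some false positives – which is precisely where the civil-liberties concerns below begin.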
This technology – facial recognition software – has hit the headlines in recent months, and is causing controversy worldwide.
Take a look at Hong Kong, for example, where anti-government protests have been escalating for nearly six months. The recent decision by Hong Kong’s Chief Executive, Carrie Lam, to prohibit the wearing of face masks in public in a bid to tackle the violent clashes proved to be very controversial.
Many pro-democracy protesters saw this as an attempt by the state to control its citizens. They believe the anonymity a mask provides is essential to avoiding the perceived threat from mainland China – where facial recognition is used almost everywhere, and surveillance is a fundamental tool for maintaining authoritarian rule.
The notion that facial recognition software is an invasion of our privacy has raised its head much closer to home, too. Between 2016 and 2019, the Metropolitan Police carried out ten trials of live facial recognition (LFR) around the capital to determine whether the technology could help the force tackle crime in future – a decision that has yet to be made.
It’s not only state use of LFR that is causing controversy – private companies have been using it too. In the past year, various stories have emerged of live music venues and bars trying to use facial recognition software for good: to identify ‘suspicious’ individuals at events, or even to see who was first in the queue at the bar.
While these superficially seem viable uses of the technology, their necessity has been called into question by many, including the campaign group Big Brother Watch.
We must also remember that while developments in AI bring huge benefits to society, the technology is still not free from bias – studies have repeatedly found that facial recognition systems misidentify women and people with darker skin at higher rates than white men. Arguably, its use could put individuals at unnecessary risk of being stereotyped based on their appearance.
A final case that made national news in September was the revelation that the developers of King’s Cross Central had installed CCTV cameras using facial recognition software on the site. While this is an individual case – and the technology was apparently last used in March 2018 – it points to a bigger issue: the potential for facial recognition to be deployed without the knowledge of the authorities, or the consent of the general public.
Such private sector use is arguably more of a cause for concern than government use. It’s harder to regulate who is using LFR technology, how the footage is being used, and who has access to the data. And this raises questions about how necessary LFR really is, and how much privacy and freedom we truly have nowadays.
There are certainly valid arguments both for and against the use of facial recognition software in public spaces.
When used for good, and in the right hands (exactly whose hands those should be is a whole other story), these developments could help keep our public spaces safer than ever before, taking criminals off the streets before we even know they’re there.
But at what cost? Being able to trace our every movement? Diminishing our civil liberties and freedoms?
It seems that, in an age of constant technological development, there’s a fine line between keeping people safe and protecting human rights. In the case of facial recognition software, it’s a blurred line that has yet to be properly drawn.