Thousands of police officers and cameras will be deployed this weekend at Charles’s coronation – along with the relatively recent addition of facial recognition technology.
London’s Metropolitan police have dubbed the security operation “Golden Orb”. It will see officers redeployed from around the country to the capital. The “multi-layered” plan will also feature officers from specialised units including dogs, firearms, and marine support.
Facial recognition software is a type of biometric security measure – like fingerprint or iris scanning. It works by identifying faces in an image, and then rapidly mapping their key features. These landmarks include the shape of the cheeks, the distance between the eyes, the contour of the ears, and the depth of the eye sockets. Then, it compares this key biometric data against a database of known faces, looking for a match.
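As a rough illustration of that encode-and-compare process (this is not the Met's system, which runs on commercial software; the open-source face_recognition library, the filenames, and the 0.6 distance threshold below are all assumptions made purely for the sketch):

```python
# Illustrative sketch only: the Met's live facial recognition uses commercial
# software, not this open-source library. The filenames, watchlist and 0.6
# threshold are hypothetical, chosen just to show the encode-and-compare step.
import face_recognition

# Build a "watchlist": one 128-number biometric template per known face.
watchlist_files = ["person_a.jpg", "person_b.jpg"]  # hypothetical images
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist_files
]

# Take a frame from a camera feed, find every face in it and map its features.
frame = face_recognition.load_image_file("crowd_frame.jpg")  # hypothetical image
for face in face_recognition.face_encodings(frame):
    # Compare this face's template against every entry on the watchlist.
    distances = face_recognition.face_distance(watchlist_encodings, face)
    # Treat anything closer than the threshold as a possible match.
    if distances.min() < 0.6:  # 0.6 is the library's default tolerance
        print("Possible watchlist match, index:", distances.argmin())
```

Everything hangs on that final threshold: set it looser and the system flags more people wrongly, set it tighter and it misses people it is looking for.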
In addition, snipers will be stationed on some rooftops in central London. The skies above the city will be closely watched. No drones or planes are permitted to fly over central London on Saturday, except for police helicopters and authorised media.
Increased powers
On top of all this, draconian new laws handing police increased powers to curb ‘disruptive’ protests by climate campaigners and others came into force just this week. The Canary’s Tom Anderson reported that whilst official sources insisted nothing was rushed through ahead of the coronation, insider information suggested otherwise:
In fact, the Guardian reported that one ‘senior’ insider, who knew about the discussions between the police and the government, confirmed that the Act had been brought into force early, ahead of the coronation on 6 May.
When the Met disclosed their planned use of facial recognition technology, they stated that:
The watch list will be focused on those whose attendance on Coronation Day would raise public protection concerns.
That includes those wanted for offences or with outstanding arrest warrants, the statement added. However, there are problems with how facial recognition works in practice.
Criticisms and bias
Among the different biometric security measures, facial recognition has the lowest accuracy. In fact, its use by public bodies has already been banned in some US cities, such as Boston and San Francisco.
Facial recognition technology has also received widespread criticism for its racial and gender biases. For example, the Regulatory Review reported that:
In a National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms – “a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men – making Black women particularly vulnerable to algorithmic bias.
No matter how good a computer can be, it is still limited by the biases of the people who created it. If an algorithm isn’t adequately trained to recognise Black faces, for example, it will show a lower accuracy when identifying Black people.
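One way to see how that plays out is to measure error rates for each demographic group separately, rather than relying on a single aggregate accuracy figure. The sketch below is purely illustrative – the record format, field names, and groups are hypothetical:

```python
# Illustrative sketch: measure false-identification rates per demographic group.
# The record format, field names and groups are hypothetical, not real data.
from collections import defaultdict

def false_identification_rates(results):
    """results: iterable of dicts like
    {"group": "Black women", "flagged": True, "on_watchlist": False}"""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for r in results:
        total[r["group"]] += 1
        if r["flagged"] and not r["on_watchlist"]:
            wrong[r["group"]] += 1
    return {group: wrong[group] / total[group] for group in total}
```

An overall accuracy figure can look respectable while one group quietly absorbs most of the false alerts – which is exactly the pattern the NIST researchers found.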
The Met’s study
The Metropolitan Police (MPS) and South Wales Police (SWP) carried out a study with the National Physical Laboratory – the Equitability Study – which found no statistically significant difference in how accurately the software recognised faces across genders and ethnicities. However, the hit rate was still slightly lower for Black women overall.
The study also stated that, in order to avoid introducing bias, it used datasets containing equal numbers of members of each demographic group. It then noted that:
For assessment of equitability under operational settings, the results from the large dataset are appropriately scaled to the size and composition of watchlist or reference image database of the operational deployment.
However, the study also worked from the assumption that:
In operational deployments the demographic balance of the watchlist would be different, and more likely to reflect the demographic balance in society, or of images in the MPS or SWP Custody Image Systems.
In other words, in operational police use, the watchlist that the facial recognition system matches against would be drawn from the police’s own records.
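To see why the make-up of that watchlist matters even if the algorithm treated every face identically, consider a back-of-envelope sketch. Every number in it is invented, and it leans on one simplifying assumption – that a passer-by is most likely to be falsely matched against watchlist entries from their own demographic group:

```python
# Back-of-envelope illustration only: all numbers are invented, and the
# simplifying assumption (false matches mostly occur between faces from the
# same demographic group) is stated, not proven.

def expected_false_alerts(footfall, watchlist_counts, per_pair_fmr=1e-4):
    """footfall: passers-by scanned per group; watchlist_counts: watchlist
    entries per group; per_pair_fmr: chance a single comparison falsely matches."""
    return {
        group: footfall[group] * watchlist_counts.get(group, 0) * per_pair_fmr
        for group in footfall
    }

# Identical crowd sizes, identical error rate per comparison - the only thing
# that differs is who the police have put on the watchlist.
crowd = {"white": 10_000, "Black": 10_000}
watchlist = {"white": 100, "Black": 300}  # hypothetical, reflecting over-policing
print(expected_false_alerts(crowd, watchlist))
# {'white': 100.0, 'Black': 300.0} - three times the false alerts for Black passers-by
```

Even a hypothetically ‘unbiased’ algorithm, in other words, amplifies whatever bias is already baked into the list of faces it is told to hunt for.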
Back in March 2023, the Casey Report found the Met to be institutionally racist, sexist and homophobic. Black and brown people are consistently over-represented in the justice system. This extends from the point of contact with police, to sentencing in the courts, to the prison system.
Thus, no matter how perfect the facial recognition software is, it will still be subject to the biases of the police operating it. The technology will inevitably persecute more Black and brown people if the watchlist it is linked to is itself biased against them. Of course, that watchlist was created by a police force that is racist to its very core.
Big Brother
Civil liberties organisation Big Brother Watch hit out at the proposed use of facial recognition at the coronation, branding it an:
authoritarian mass surveillance tool that turns the public into walking ID cards.
The organisation’s legal and policy officer, Madeleine Stone, added that:
This Orwellian technology may be used in China and Russia but has no place on the streets of Britain.
However, that’s not quite true, is it? This is precisely the kind of inherently racist, authoritarian action that typifies Britain. It will serve to further persecute the exact same groups that our society already persecutes. This is its function, and it is working precisely as intended.
Heavy-handed surveillance by a growing police state is far more British than any king, coronation, or crown. We’d do well to acknowledge that fact.
Featured image via YouTube screenshot/Sky News
Additional reporting via Agence France-Presse