The Metropolitan Police have been given the go-ahead to use live facial recognition cameras on the streets of London.
Privacy campaigners point out that, aside from the obvious “serious threat to civil liberties”, earlier trials were not as successful as the Met Police previously claimed.
The Met reported that the trials showed the system detected 70% of wanted suspects who walked past the cameras, and that only one in 1,000 people generated a false alert. However, an independent review by Essex University found that only eight of the 42 matches were “verifiably correct”.
Campaigners expressed further concerns that black and minority ethnic groups would be disproportionately affected because the software had been designed to recognise white faces.
Dr Daragh Murray, co-author of the Essex report, called for live trials to cease immediately, citing “significant operational shortcomings”, and said deployment should wait until human rights compliance had been assured and the technology had been subjected to full public scrutiny and a proper national debate.
A key concern raised by the Essex team was over “numerous operational failures”. The report pointed to “inconsistencies in the process of officers verifying a match made by the technology; a presumption to intervene; how the Metropolitan Police engaged with individuals; and difficulties in defining and obtaining consent of those affected”.