#30287
Andrew Scicluna
Participant

Article 2 – Metropolitan Police’s facial recognition technology

A recent report has put the Metropolitan Police’s facial recognition technology under scrutiny, finding it to be 81% inaccurate: roughly four out of five people it flagged were innocent. Of the 42 people matched by the technology, only eight were correctly identified. The evaluation, conducted by researchers led by Professor Pete Fussey and Dr. Daragh Murray, questions the system’s reliability, raises concerns about its legality and ethics, and calls for Scotland Yard to stop using it.
The police force, however, has defended the technology, claiming its system misidentifies only one in every 1,000 people. It explains that this figure comes from comparing unsuccessful matches against the total number of faces evaluated, not just the people flagged.
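The gap between the two figures comes down to the denominator each side uses. A minimal sketch of the arithmetic, using the report’s 42 flagged / 8 correct figures (the total number of faces scanned is a hypothetical illustration, not a number from the report):

```python
# Figures reported by the researchers: 42 people flagged, 8 correct matches.
flagged = 42
correct = 8
false_matches = flagged - correct  # 34 innocent people flagged

# Researchers' metric: share of FLAGGED people who were innocent.
error_rate_flagged = false_matches / flagged
print(f"Error rate among flagged people: {error_rate_flagged:.0%}")  # ~81%

# Met's metric: false matches over ALL faces scanned.
# total_scanned is an assumed illustrative value, not from the report.
total_scanned = 34_000
error_rate_scanned = false_matches / total_scanned
print(f"Error rate among all faces scanned: {error_rate_scanned:.1%}")  # ~0.1%, i.e. "1 in 1,000"
```

Both calculations use the same 34 false matches; dividing by the small pool of flagged people yields 81%, while dividing by the much larger pool of everyone scanned yields a fraction of a percent.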
The report also highlights legal issues, including the difficulty of obtaining meaningful consent from the public. People who avoided the cameras during the trials were sometimes treated as suspicious, and some were even fined or detained for unrelated minor offences. The researchers argue that this approach undermines the idea of consent and feeds what they call “surveillance creep.” They also found that the databases of wanted individuals used by the facial recognition technology, known as “watch lists,” were sometimes outdated, leading to people being flagged even after their cases had been resolved. In some instances there was no clear reason why a person was on the list at all, adding further concern.
The report calls for the Met to halt the program. Legal groups such as Big Brother Watch and Liberty are pushing for a judicial review, arguing that the system violates privacy rights. Silkie Carlo, director of Big Brother Watch, said the report is definitive enough that the Metropolitan Police should abandon the system, and privacy advocates are determined to see the issue resolved in court.
In my opinion, while technological advances are exciting, relying on flawed systems like facial recognition is risky. The technology may have potential, but it is not yet reliable enough to serve as the primary method for critical decisions such as distinguishing a criminal from an innocent person. If these systems are to be deployed, human oversight is essential, as errors can have drastic consequences for innocent people. Better technology matters for the future, but until it is consistently accurate, it should support police work, not drive it.