
‘Big Brother is Watching You’: The Use of Live Facial Recognition by Law Enforcement Agencies and International Human Rights Law

The voice came from an oblong metal plaque like a dulled mirror which formed part of the surface of the right-hand wall […]. The instrument (the telescreen, it was called) could be dimmed, but there was no way of shutting it off completely.[1]

 

1. Introduction


In early 2020, two notable events occurred in quick succession: surveillance vans equipped with Live Facial Recognition (‘LFR’) technologies were seen patrolling Cardiff City Stadium before the club’s football match with Swansea City,[2] and the Metropolitan Police Service announced that it had installed an LFR camera at Oxford Circus, a highly trafficked area in the Westminster borough of London.[3] Facial recognition is by no means the only way in which personal data is captured and used by the police,[4] but it is certainly one that has gripped the imagination of the public. Civil society groups have described the technology as an ‘Orwellian mass surveillance tool’ not unlike the instruments used in the fictional state of Oceania in 1984.[5] The increasing use of facial recognition technology by law enforcement agencies, therefore, poses an acute regulatory challenge, with many concerned that ‘existing legislative and policy frameworks are outdated and fail to account for the new and various ways in which biometric data is, or might be, accessed and used’.[6]

 

This article seeks to contribute to evolving scholarship on the use of LFR by law enforcement from the vantage point of international human rights law (‘IHRL’).[7] It begins by situating the regulation of LFR within the parameters of the relationship between the law and technology, before elaborating on how LFR operates, as well as on the justifications for its deployment by police forces. The article then examines the main sources of international law on discrimination that apply in this context, focusing in particular on the issue of racial bias. It does not, however, set out an exhaustive checklist of the potential systemic biases inherent in LFR technology; rather, it addresses the underlying normative question of when, if ever, the police may make use of such tools in a manner compliant with IHRL principles on equality and non-discrimination.

 

2. LFR and the relationship between law and technology

 

The nature of the relationship between the law and technology has been debated at length. Proponents of technological exceptionalism have suggested that the ‘essential qualities’ of technology ‘drive the legal and policy conversations that attend them’.[8] This school of thought reinforces what Carter and Marchant have described as the ‘pacing problem’: the idea that ‘rules-based regulation cannot keep up with the pace of new developments’.[9] Indeed, the notion that the law is ill-equipped to deal with the extent and rate of technological advancement has gained traction amongst scholars. Johnson and Post, for example, in response to the limitations of ‘territorially-based law-making and law-enforcing authorities’,[10] have argued in favour of a new form of Internet governance, unbound by geographical boundaries. Similarly, Calo has suggested that the interaction of new and emerging technologies with outmoded legal frameworks prompts a ‘systematic change to the law or legal institutions in order to reproduce, or if necessary, displace, an existing balance of values’.[11] This process is not framed as static in nature, but as one continually on the move, since ‘technology has not stood still’.[12]

 

However, there are those who have expressed reservations about attempts to linearise the relationship between the development of the law and technological change. Jones, in particular, has criticised theories of technological exceptionalism for failing to account for the fact that ‘the story of law and technological change is much more varied, messy, and political’.[13] According to Jones, rather than linearly following technology, ‘a great deal of legal work shapes technology and the way in which it will be understood in the future’, with ‘scholars, judges, regulators, and legislators often [making sense] of technologies in a way that is forward-looking’.[14] In a similar vein, Balkin indicates that it is ‘[unhelpful] to speak in terms of ‘essential qualities’ of a new technology that we can then apply to law’.[15] Although acknowledging that Calo’s thesis ‘is destined to be the starting point for much future research in the area’, Balkin argues in favour of assessing the evolving uses of emerging technology for the purposes of developing the law, given that ‘people continually find new ways to employ technology for good or for ill’.[16]

 

The global governance of LFR, and its uses in law enforcement contexts, are thoroughly interconnected with competing conceptions of the relationship between law and technology. On one hand, the essentialist tendencies of technological exceptionalism have been borne out in descriptions that stress the novelty of facial recognition as ‘an attractive solution to address many contemporary needs for identification’, bringing together ‘the promise of other biometric systems […] and the more familiar functionality of visual surveillance systems’.[17] On the other hand, a number of authors have raised concerns about the uses (and abuses) of such technology on human rights grounds.[18] For instance, scholars including Bu have attempted to evaluate the lawfulness of facial recognition technology by reference to both EU data protection rules and the privacy jurisprudence of the European Court of Human Rights (‘ECtHR’).[19] Whilst attention has naturally focused on the extent to which infringements upon privacy and data protection rights may be ‘justified as a necessary and proportional invasion’,[20] the extent to which the use of LFR has been tested against international legal principles and standards on discrimination remains under-developed.[21] Indeed, in the British context, Bradford, Yesberg, Jackson, and Dawson have described ‘potential bias and discrimination’ as a ‘significant controversy’ in the use of LFR by police forces—a concerning possibility which requires consideration of two issues.[22] First, what justifications have been raised in favour of the use of LFR by police forces? Second, what specific aspects of LFR usage by law enforcement raise the spectre of potential discrimination?

 

3. The operation of LFR

 

In perhaps the most sustained independent analysis of the use of LFR in the context of British policing, Fussey and Murray identify four salient features.[23] First, facial recognition technology allows ‘for the real time biometric processing of video imagery’; unlike CCTV cameras, the process by which individuals are identified and tracked through their facial features is partly automated.[24] Second, compared to open street surveillance cameras, LFR possesses ‘additional and powerful capabilities’ in the form of ‘enhanced data-matching’.[25] Third, LFR is integrative in nature: it is possible for such technology to be accommodated into ‘police body worn cameras or city-wide surveillance camera networks, on a 24/7 basis, and for the resultant data to be subject to automated analysis’.[26] Fourth, the objects identified in the deployment of facial recognition technology are not disposable or transferable: the software ‘creates a digital signature of identified faces, and then analyses those digital signatures against a database (referred to as the ‘watchlist’) in order to determine whether there are any matches’.[27]
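The matching step that Fussey and Murray describe can be illustrated with a minimal, purely schematic sketch. The Python fragment below assumes that a face detected in a video frame has already been converted into a numerical feature vector (the ‘digital signature’); the `match_against_watchlist` function, the cosine-similarity measure and the threshold value are illustrative assumptions, not a description of any vendor’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face 'digital signatures' (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return watchlist identities whose enrolled signature exceeds the threshold.

    `probe` is the signature extracted from a live camera frame; `watchlist`
    maps an identity label to a previously enrolled signature. The threshold
    trades false positives against false negatives and is set by the operator.
    """
    scores = [(identity, cosine_similarity(probe, enrolled))
              for identity, enrolled in watchlist.items()]
    return sorted([s for s in scores if s[1] >= threshold],
                  key=lambda pair: pair[1], reverse=True)
```

Even on this simplified model, the legally significant choices, namely who is enrolled on the watchlist and where the alert threshold is set, are made by the deploying force rather than by the algorithm itself.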

 

4. The rationales for police use of LFR

 

Having briefly outlined how facial recognition technology operates, this article turns to one remaining question that warrants consideration: why, exactly, has LFR been promoted in modern policing practices? The answer to this question rests on the belief that the technology advances the aims of effective policing and crime prevention. It is my proposal that this assumption is predicated on two rationales—efficiency and impartiality—and that the orthodoxies upon which these rationales in turn rest have unstable foundations. The gaps and inconsistencies responsible for this instability are significant to the extent that they bear upon the discrimination risks associated with the use of facial recognition tools by law enforcement agencies.

 

a. Efficiency

 

In attempting to justify its deployment by police forces, advocates of LFR have repeatedly lauded its ‘efficiency’ in enhancing surveillance capabilities. As Hamman and Smith have put it, ‘[facial recognition] technology has benefited law enforcement in innumerable ways, such as creating reliable evidence, enabling efficient investigations, and helping to accumulate data that allow law enforcement to react quickly and effectively’.[28] This emphasis on the functionality of facial recognition technology can be directly correlated with the increasing use of such tools ‘to ensure security and combat terrorism around the world’.[29] Yet LFR also has the potential to be employed in situations that fall outside the scope of national security and counterterrorism operations. As observed by the Divisional Court in R (Bridges) v Chief Constable of South Wales and Ors [2019] EWHC 2341: 

 

Like fingerprints and DNA, AFR technology enables the extraction of unique information and identifiers about an individual allowing his or her identification with precision in a wide range of circumstances. Taken alone or together with other recorded metadata, AFR-derived biometric data is an important source of personal information.[30]

 

There is cause for concern with regard to the potential mission creep of facial recognition technology into operational policing writ large. Whilst efficiency-based arguments in favour of using LFR in law enforcement contexts hinge upon its putative benefits, what is less clear is whether they are grounded in empirical evidence. At present, the Face Recognition Vendor Test, a series of large-scale assessments of LFR systems conducted by the National Institute of Standards and Technology (‘NIST’) in the United States, represents the most comprehensive independent evaluation of cross-demographic performance inconsistencies in such technology.[31] Its 2019 study of 189 LFR software programs identified that racial minorities were up to 100 times more likely to be falsely matched than White males in facial recognition deployment exercises.[32] Other analyses have also identified problems with LFR software in relation to the accurate identification of juveniles and darker-skinned women.[33] These inconsistencies raise serious concerns as to the effectiveness of LFR as a mode of data-driven policing and undermine the extent to which the attendant disruption to ‘the balance of power between governors and the governed’ is normatively warranted.[34]
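To see what a disparity of that order can mean in practice, consider a deliberately simplified, hypothetical calculation; the rates and crowd sizes below are illustrative assumptions, not figures drawn from the NIST study.

```python
# Hypothetical figures for illustration only; not NIST data.
scanned_faces_per_group = 10_000          # faces processed in a single deployment
false_match_rate = {
    "baseline group": 0.0001,             # assumed 1-in-10,000 false-match rate
    "disadvantaged group": 0.01,          # assumed 100x higher false-match rate
}

for group, rate in false_match_rate.items():
    expected_false_alerts = scanned_faces_per_group * rate
    print(f"{group}: ~{expected_false_alerts:.0f} expected false alerts")
# baseline group: ~1; disadvantaged group: ~100 under these assumed figures.
```

On these assumed figures, members of the disadvantaged group would account for roughly one hundred times as many erroneous alerts, and therefore potential police interventions, as the baseline group in a single deployment.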

 

b. Impartiality

 

Morgenthau’s conception of the police as ‘law-enforcing agency’ usefully illustrates how the notion of police impartiality is, at its very core, a protean concept.[35] On one hand, the meaning of police impartiality may be viewed as identical to that of equality before the law: ‘the police are in this sense impartial if they mete out equal treatment to persons and situations which the law requires to be so’.[36] Nevertheless, the impartiality of the police also ‘performs a social and political function similar to that performed by the reputation of the courts for being the impartial “mouthpiece” of the law’.[37] As it is the purpose of law enforcement agencies in this context to ‘defend that legal order and maintain that status quo’, an institution whose purpose is ‘the defense of the legal order cannot be impartial with regard to it, but must rather be for or against it’.[38]

 

With respect to the use of LFR by law enforcement agencies, impartiality is closely allied with the notion of accuracy. The NIST’s Face Recognition Vendor Test clearly illustrates this association.[39] NIST assesses the correctness of classification procedures so that ‘face recognition system developers, and end users should be aware of these differences and use them to make decisions and to improve future performance’.[40] In correlating the ostensible precision of algorithms with notions of fairness and objectivity, the use of facial recognition technology by law enforcement agencies promises to improve the accuracy of suspect identification[41] and curb discriminatory uses of police discretion, such as in the stop and search context.[42] 

 

There is, however, one problematic point that can be drawn from the use of impartiality as a rationale for the deployment of LFR by police forces. As mentioned earlier, the social and political function of police impartiality—that is, to defend the legal order and maintain the status quo—inevitably raises the question of whether LFR may in fact entrench existing biases amongst law enforcement agencies. The long and difficult history of communities of colour feeling the sharp edge of the law as a consequence of police action heightens the significance of such an inquiry.[43] Given that the disproportionate representation of Black people in the criminal justice system has, in the words of one author, become ‘the single most vexed, hotly controversial and seemingly intractable issue in the politics of crime, policing and social control’, the reinforcement of this status quo via LFR technology engages fundamental issues as to the categorisation of relations between the police and minority ethnic communities.[44]

 

5. International human rights law on discrimination

 

The principle of non-discrimination has served as a constant since the nascence of human rights law in the form of the Universal Declaration of Human Rights, which applies without ‘distinction of any kind’.[45] The International Convention on the Elimination of All Forms of Racial Discrimination (‘CERD’) and the International Covenant on Civil and Political Rights (‘ICCPR’)—the two most important treaties on the prohibition of discrimination in the context of civil liberties—provide fertile ground for an analysis of whether the use of LFR by law enforcement agencies is IHRL-compliant.

 

International law prohibits both direct and indirect discrimination and places positive and negative obligations on States to uphold these prohibitions. The CERD prohibits discrimination on the basis of ‘race, colour, descent, or national or ethnic origin’,[46] and obliges States, in particular, to legislate against racial discrimination and ensure compliance amongst public authorities with the obligations set out in legislation.[47] Discrimination on the same grounds is prohibited under the ICCPR. Indeed, the ICCPR further clarifies that the right to not be discriminated against ‘on the ground of race, colour, sex, language, religion or social origin’ is non-derogable,[48] even in situations such as public emergencies where States may permissibly limit other rights (freedom of expression, for example).[49] Framed within the parameters of this overarching framework, the question that falls to be determined is whether international law prohibits the use of race, or a proxy for race, as a factor in the deployment of LFR technology by law enforcement.

 

For the sake of this inquiry, it is necessary to outline the elements of direct and indirect discrimination under international law. Direct discrimination occurs when an act has the ‘purpose’ of ‘nullifying or impairing the recognition, enjoyment or exercise’ of a particular group’s rights or freedoms.[50] Of most direct relevance to the use of LFR by law enforcement, indirect discrimination involves actions or policies that are ostensibly neutral but which in practice have a disproportionate impact, or ‘effect’, on a particular group’s enjoyment of rights or freedoms.[51] The particular context and circumstances of an act are relevant considerations for these purposes; as confirmed by the Committee on the Elimination of Racial Discrimination in L.R. v Slovakia, ‘indirect discrimination can only be demonstrated circumstantially’.[52] Notwithstanding the recognition of ‘purpose-based’ and ‘effect-based’ discrimination under IHRL, not all forms of disparate or disproportionate treatment are impermissible. A difference in treatment will not constitute discrimination if the criteria for such differentiation are ‘legitimate’.[53]

 

The practice of ‘racial profiling’ in policing—namely, ‘the use of race, ethnicity, religion, or national origin rather than individual behaviour as the basis for making law enforcement decisions about who may be involved in criminal activity’—exposes the slippage between ostensibly legitimate forms of discrimination and conduct which violates international law.[54] Bowling and Phillips, for instance, have argued that crime statistics could be seen as both a ‘shield’ for law enforcement agencies in the United Kingdom ‘with which to defend the disproportionate use of stop search’, and a ‘sword’ by which ‘police officers might select suspects’ within groups purported to be likely participants in criminal activity.[55] In the American context, Morris has demonstrated how violations of international law are revealed in evidence concerning the disparate impact of pretextual traffic stops on African Americans.[56] These instances of disproportionality do not just incur profound costs at an individual level, but also directly impinge upon the relationship of minority ethnic communities with the criminal justice system: ‘the systematic profiling of black and minority ethnic groups inevitably leads these groups to lose faith in the very authorities that are meant to protect them’.[57] As a consequence, racial profiling conducted under the auspices of crime prevention objectives undermines the very enjoyment of rights guaranteed under the CERD and ICCPR. The significance of LFR in this paradigm relates to its use as a medium by which discriminatory police practices are not only reflected but entrenched.

 

A textual analysis of the CERD and its interpretative instruments illustrates its potential utility as a means of regulating the deployment of LFR by the police. The object and purpose of the Treaty, according to its Preamble, is to ‘combat racist doctrines and practices’.[58] Article 2 of the CERD obliges States to ‘engage in no act or practice of racial discrimination’, [59] as well as to ‘amend, rescind or nullify’ any local, national or governmental policies which are identified as having the effect of perpetuating discrimination.[60] Regarding the administration of justice, General Recommendation No. 31 provides that States must take necessary steps to prevent ‘questioning, arrests and searches’ based solely on ‘that person’s colour or features or membership of a racial or ethnic group’, or any profiling which exposes him or her to greater suspicion.[61] Correspondingly, General Recommendation No. 13 provides that law enforcement officials must be ‘properly informed about the obligations their state has entered into under the Convention’ and conduct ‘intensive training to ensure that in the performance of their duties they respect as well as protect human dignity and maintain and uphold the human rights of all persons without distinction’.[62] These provisions are significant to the extent that they provide a framework against which the discriminatory impacts of facial recognition technology can be tested, recognising the role of the police as an entry point into the criminal justice system and a means by which States can exercise coercive measures.

 

6. LFR and racial bias

 

The issue of racial bias is a point of contention amongst critics and proponents of LFR.[63] Whilst comprehensive studies such as NIST’s Face Recognition Vendor Test have identified extensive demographic biases in deployment exercises, localised assessments conducted in recent years suggest that these discrepancies have been overstated. Indeed, in April 2023, the National Physical Laboratory (‘NPL’), the national measurement standards laboratory of the United Kingdom, tested two facial recognition systems currently used by the Metropolitan Police Service and South Wales Police.[64] Although the NPL study confirmed that the LFR software had the poorest performance on Black female faces, it noted that the demographic inconsistencies in accuracy rates were statistically insignificant.[65] The suggestion that statistical insignificance is decisive in overcoming the issue of demographic bias, however, dislocates the use of such technology from its immediate context. The implications of inaccuracies in LFR, however minuscule they are purported to be, can be severe when paired with historical racial disparities in police contact. A false positive places an individual at risk of escalated action in the form of police stops, detention and arrest, all on the basis of an incorrect match against a database.
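A back-of-the-envelope calculation, again using entirely hypothetical figures, shows why a per-scan error rate described as statistically insignificant can still translate into a material risk for individuals who are repeatedly exposed to LFR-equipped cameras.

```python
# Hypothetical figures for illustration only.
per_scan_false_match = 0.0005      # assumed probability of a false match on any one scan
scans_per_year = 200               # assumed encounters with LFR cameras in a year

p_at_least_one = 1 - (1 - per_scan_false_match) ** scans_per_year
print(f"Chance of at least one false match in a year: {p_at_least_one:.1%}")
# Roughly 9.5% under these assumptions; higher again for groups with elevated error rates.
```

Where that cumulative risk is concentrated in communities that already experience disproportionate police contact, the result is precisely the kind of disparate impact with which indirect discrimination doctrine is concerned.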

 

The divergence in empirical evidence on demographic bias goes to the heart of an assessment of the legality of LFR. In Bridges, the Court of Appeal determined that, in failing to take reasonable steps to make enquiries about whether facial recognition software had bias on racial or sex grounds, the South Wales Police had failed to comply with the Public Sector Equality Duty (‘PSED’) under section 149 of the Equality Act 2010.[66] Despite the lack of clear evidence of racial bias, as Gikay observes, the Court’s finding in respect of the PSED ‘demonstrates that the current equality law [in the UK] can effectively address the concerns of inaccuracy and bias that are raised by the use of LFR technology by law enforcement authorities’.[67] In light of the widespread cross-jurisdictional use of LFR technology, the question remains as to whether IHRL principles on discrimination can be used to these same ends.[68]

 

7. LFR and indirect racial discrimination

 

Whilst law enforcement agencies generally do not make use of LFR technology for the purposes of direct discrimination, the effects of potential biases and errors, as discussed, mean that these tools can have a disproportionate impact on racial minority groups. This amounts to a form of indirect discrimination under the CERD unless the ‘criteria for such differentiation, judged against the objectives and purposes of the Convention’, are legitimate.[69]

 

The jurisprudence of the ECtHR is particularly instructive with regard to the issue of legitimate differentiation under international law.[70] The ECtHR has defined discrimination as differential treatment, ‘without an objective and reasonable justification, [of] persons in relevantly similar situations’.[71] A difference in treatment is discriminatory under Article 14 of the ECHR if it has no objective and reasonable justification—that is, if it does not pursue a ‘legitimate aim’ or if there is not a ‘reasonable relationship of proportionality between the means employed and the aim sought to be realised’.[72] Examining a discrimination claim in this context thus requires a two-tiered approach, focusing first on the aim pursued, and second on the relationship between the impugned difference in treatment and the realisation of that aim.

 

Although the aim of efficient and effective law enforcement is likely to be recognised as a legitimate objective, the second, more difficult undertaking relates to whether the use of biased, error-prone LFR technology is proportionate to this aim. Given that Contracting States’ margin of appreciation is narrower in cases involving so-called ‘suspect’ discrimination grounds, within which the category of race falls,[73] ECHR case law mandates that ‘very weighty reasons’ be advanced to show that differential treatment is both suited to and necessary for the realisation of the legitimate aim pursued.[74] Any putative gains in efficiency and so-called impartiality from the perspective of law enforcement agencies must, therefore, be balanced against the infringement upon the right to non-discrimination and the plethora of other freedoms affected by the deployment of LFR technology.

 

The unique human rights impacts of LFR suggest that a restrictive approach should be adopted in such a proportionality assessment. As previously mentioned, the use of LFR technology in law enforcement contexts negatively affects the right to a private life due to the ‘capture and processing of biometric information without an individual’s consent’.[75] Deploying LFR technology in public places also has profound negative implications for the enjoyment of rights to freedom of expression and association. The ECtHR’s recent determination in Glukhin v. Russia is a valuable illustration to this effect.[76] In Glukhin, the Court determined that the conviction of a protestor for failing to notify Russian authorities of his intention to hold a solo demonstration, and the use of LFR technologies within this context, violated his rights to respect for his private life and freedom of expression, protected under Articles 8 and 10 of the ECHR respectively.[77]

 

Viewed in light of this catalogue of rights violations, the potential demographic biases in LFR technology ‘portend a strong foundation for further restricting how governments use’ such tools on indirect discrimination grounds.[78] Indeed, as previously stated, Article 2 of the CERD mandates States to nullify laws and regulations which have the effect of perpetuating racial discrimination.[79] General Recommendation No. 31 further provides that States must ‘implement national strategies or plans of action aimed at the elimination of structural racial discrimination’.[80] The significance of these provisions in relation to the governance of LFR is made clear by the absence, in various jurisdictions, of a legal framework that clearly defines the use, regulation and oversight of such technology.[81] In the absence of targeted regulation, the increasing deployment of facial recognition technology in policing will only perpetuate existing societal divisions and erode confidence in the criminal justice system amongst minority ethnic communities.

 

8. Conclusion

 

Like the telescreens that captured Winston’s attention in the opening pages of 1984, the human rights impacts of LFR may be dimmed, but cannot be shut off completely. Taking stock of the positive IHRL obligations on States and the rights affected by indirect discrimination, this paper has suggested that the use of biased and error-ridden facial recognition technology is unlikely to be proportionate to a legitimate aim of effective law enforcement. Such practices should, in this author’s opinion, amount to impermissible indirect discrimination under the ECHR and under international law in general. Whilst this conclusion is necessarily qualified by the divergence in evidence on demographic bias, it recognises the possibility of technological advances either heightening or mitigating the risk of indirect discrimination in the deployment of LFR. This latent risk means that police forces making use of such technology ought to be alive to the prospect of racial profiling occurring and the possibility of trust amongst minority ethnic groups being further diminished.

Udit Mahalingam


Udit Mahalingam is a recent LLM graduate from the University of Cambridge. He is interested in civil liberties and human rights issues, particularly within the context of national security and law enforcement practices.

[1] George Orwell, 1984 (Houghton Mifflin Harcourt 2013) 3-4.

[2] Steven Morris, ‘Anger over use of facial recognition at South Wales football derby’ The Guardian (London, 12 January 2020) <https://www.theguardian.com/technology/2020/jan/12/anger-over-use-facial-recognition-south-wales-football-derby-cardiff-swansea> accessed 1 January 2024.

[3] Zoe Tidman, ‘Metropolitan Police deploys facial recognition in central London with two hours’ warning’ The Independent (London, 20 February 2020) <https://www.independent.co.uk/news/uk/crime/met-police-facial-recognition-technology-city-westminster-a9346831.html> accessed 1 January 2024.

[4] Indeed, the use of LFR by law enforcement can be viewed as part of the broader phenomenon of so-called ‘predictive policing’. See Andrew Guthrie Ferguson, ‘Big Data and Predictive Reasonable Suspicion’ (2015) 163 U. Pa. L. Rev. 327.

[5] Sam Shead, ‘“Orwellian” Surveillance Cameras Face Legal Battle’ (Forbes, 25 July 2018) <https://www.forbes.com/sites/samshead/2018/07/25/orwellian-surveillance-cameras-face-legal-battle/?sh=433d578e1e39> accessed 1 January 2024.

[6] Matthew Ryder KC, ‘Independent legal review of the governance of biometric data in England and Wales’ (‘The Ryder Review’) (Ada Lovelace Institute, June 2022) [1.3].

[7] Although the focus of the present article, policing is not the only area in which issues around the regulation of LFR arise—see Information Commissioner’s Office, ‘Information Commissioner’s Opinion: The Use of Live Facial Recognition Technology in Public Places’ (18 June 2021) <https://ico.org.uk/media/for-organisations/documents/2619985/ico-opinion-the-use-of-lfr-in-public-places-20210618.pdf> accessed 1 January 2024.

[8] Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 Calif. L. Rev. 513, 549.

[9] Ruth B Carter and Gary E Marchant, ‘Principles-Based Regulation and Emerging Technology’ in Gary E Marchant, Braden R Allenby and Joseph R Herket (eds), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight (Springer 2011) 165.

[10] David R Johnson and David Post, ‘Law and Borders—The Rise of Law in Cyberspace’ (1996) 48 Stan L. Rev. 1367, 1367.

[11] Calo (n 8) 552.

[12] ibid 515.

[13] Meg Leta Jones, ‘Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw’ (2018) Journal of Law, Technology & Policy, 101, 101.

[14] ibid 130.

[15] Jack M Balkin, ‘The Path of Robotics Law’ (2015) 6 Cal L. Rev 45, 45.

[16] ibid 45.

[17] Lucas D Introna and Helen Nissenbaum, ‘Facial Recognition Technology: A Survey of Policy and Implementation Issues’ 3 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1437730> accessed 1 January 2024.

[18] See, for example, Vera Lúcia Raposo, ‘The Use of Facial Recognition Technology by Law Enforcement in Europe: A Non-Orwellian Draft Proposal’ (2022) 29 European Journal on Criminal Policy and Research 515; Diego Naranjo, ‘Your face rings a bell: How facial recognition poses a threat for human rights’ <https://repository.gchumanrights.org/server/api/core/bitstreams/61aa23ad-0262-4f46-bcfe-fc74a1d485f3/content> accessed 1 January 2024; Daragh Murray, ‘Facial recognition and the end of human rights as we know them?’ (2024) 4(2) Netherlands Quarterly of Human Rights.

[19] Qingxiu Bu, ‘The global governance on automated facial recognition (AFR): ethical and legal opportunities and privacy challenges’ (2021) 2 International Cybersecurity Law Rev 113, 116-118.

[20] Amy K Lehr and William Crumpler, ‘The Impact of FRT Deployment on Human Rights’ in Facing the Risk: Part 2: Mapping the Human Rights Risks in the Deployment of Facial Recognition Technology (Center for Strategic and International Studies 2021) 10.

[21] Discrimination issues arising from the use of LFR technology have, however, been considered in the context of federal laws and regulations in the United States—see, for example, Rachel S Fleischer, ‘Bias In, Bias Out: Why Legislation Placing Requirements on the Procurement of Commercialized Facial Recognition Technology Must Be Passed to Protect People of Color’ (2020) 50(1) Public Contract Law Journal 63.

[22] Ben Bradford, Julia A Yesberg, Jonathan Jackson, and Paul Dawson, ‘Live Facial Recognition: Trust and Legitimacy as Predictors of Public Support for Police Use of New Technology’ (2020) 60(6) The British Journal of Criminology 1502, 1504.

[23] Peter Fussey and Daragh Murray, ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’, (University of Essex Human Rights Centre, July 2019) <https://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-2.pdf> accessed 1 January 2024.

[24] ibid 19.

[25] ibid 19. See also ‘Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognition Technology to Locate Persons on a Watchlist, in Public Places in England & Wales’ (Surveillance Camera Commissioner, November 2020) [3.42] <https://assets.publishing.service.gov.uk/media/5fc75c5d8fa8f5474a9d3149/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf> accessed 1 January 2024.

[26] ibid 20.

[27] ibid 19.

[28] Kristine Hamman and Rachel Smith, ‘Facial Recognition Technology: Where Will It Take Us?’ (Criminal Justice Magazine, 12 April 2019) <https://www.americanbar.org/groups/criminal_justice/resources/magazine/archive/facial-recognition-technology-where-will-it-take-us/> accessed 1 January 2024.

[29] Irena Nesterova, ‘Mass data gathering and surveillance: the fight against facial recognition technology in the globalized world’ (2020) 74 SHS Web Conf 2.

[30] R (Bridges) v Chief Constable of South Wales and Ors [2019] EWHC 2341 (Admin) [57].

[31] Patrick Grother, Mei Ngan, and Kayee Hanaoka, ‘Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects’ (National Institute of Standards and Technology, December 2019) <https://nvlpubs.nist.gov/nistpubs/ir/2019/nist.ir.8280.pdf> accessed 1 January 2024.

[32] ibid 2.

[33] Lindsey Barrett, ‘Ban facial recognition technologies for children-and for everyone else’ (2020) 26(2) Boston University Journal of Science and Technology Law.

[34] Joe Purshouse and Liz Campbell, ‘Automated facial recognition and policing: A Bridge too far?’ (2021) 42 Legal Studies 209, 209.

[35] Hans J. Morgenthau, ‘The Impartiality of the International Police’ (1968) 21(2) Revista Española de Derecho Internacional 267, 269.

[36] ibid.

[37] ibid.

[38] ibid.

[39] Grother, Ngan, and Hanaoka (n 31) 3.

[40] ibid.

[41] Kyriakos Kotsoglou and Marion Oswald, ‘The long arm of the algorithm? Automated Facial Recognition as evidence and trigger for police intervention’ (2020) 2 Forensic Sci Int Synergy 86, 87.

[42] See, for example, Sir William Macpherson of Cluny, The Stephen Lawrence Inquiry: Report of an Inquiry (Cm 4262-I, 1999) [6.45].

[43] See, for example, Joel Miller, ‘Stop and Search in England: A Reformed Tactic or Business as Usual?’ (2010) 50(5) Brit. J. of Criminology 954, 954.

[44] Robert Reiner, ‘Race and Criminal Justice’ (1989) 16(1) Journal of Ethnic and Migration Studies 5, 5. See also Julia A Yesberg, Arabella Kyprianides, Ben Bradford, Jenna Milani, Paul Quinton, and Oliver Clark-Darby, ‘Race and support for police use of force: findings from the UK’ (2022) 32(7) Policing and Society 878, 878.

[45] Universal Declaration of Human Rights, G.A. Res. 217 A(III), U.N. GAOR, 3d Sess., U.N. Doc. A/810 (1948), Art 2.

[46] International Convention on the Elimination of All Forms of Racial Discrimination (CERD), Art. 1(a), 660 U.N.T.S 195 (1966).

[47] ibid, Art. 2(a), Art. 2(c).

[48] International Covenant on Civil and Political Rights (ICCPR), G.A. Res. 2200A (XXI), UN GAOR, 21st Sess., Supp. No. 16, Art. 4(1).

[49] ibid, Art. 19(3); Human Rights Committee (‘HRC’), General Comment No. 34, Freedoms of opinion and expression [21] (2011).

[50] HRC, General Comment No. 18, Non-discrimination [1] (1989).

[51] ibid.

[52] L.R. v. Slovakia, Communication No. 31/2003, Committee on the Elimination of Racial Discrimination [10.4] (2005).

[53] See, for example, Broeks v. The Netherlands, Communication No. 172/1984, Human Rights Committee, U.N. Doc. CCPR/C/OP/2 [13]: ‘The right to equality before the law and to the equal protection of the law without any discrimination does not make all differences of treatment discriminatory. A differentiation based on reasonable and objective criteria does not amount to prohibited discrimination within the meaning of article 26.1’.

[54] Open Society Justice Initiative, Addressing Ethnic Profiling in Policing (Open Society Initiative 2009) 17.

[55] Ben Bowling and Coretta Phillips, ‘Disproportionate and Discriminatory: Reviewing the Evidence on Police Stop and Search’ (2007) 70(6) MLR 936, 948.

[56] Maria V. Morris, ‘Racial Profiling and International Human Rights Law: Illegal Discrimination in The United States’ (2001) 15 Emory Int’l L. Rev 207, 211.

[57] Michael Shiner, Zoe Carre, Rebekah Delsol and Niamh Eastwood, ‘The Colour of Injustice: ‘Race’, drugs and law enforcement in England and Wales’ (StopWatch, 14 October 2018) iv <https://www.stop-watch.org/what-we-do/research/the-colour-of-injustice-race-drugs-and-law-enforcement-in-england-and-wales/> accessed 1 January 2024.

[58] CERD (n 46), Preamble.

[59] ibid, Art. 2(1)(a).

[60] ibid, Art. 2(1)(c).

[61] ‘General Recommendation No. 31, General Recommendation XXXI on the Prevention Of Racial Discrimination in the Administration and Functioning of the Criminal Justice System’ (Committee on the Elimination of Racial Discrimination, 2005) 20.

[62] ‘General Recommendation No. 13, General Recommendation XIII on the Training of Law Enforcement Officials in the Protection of Human Rights’ (Committee on the Elimination of Racial Discrimination, 1993) 2.

[63] See, for example, the Court of Appeal’s observations in R (Bridges) v Chief Constable of South Wales and Ors [2020] EWCA Civ 1058 [199]. 

[64] Tony Mansfield, ‘Facial Recognition Technology in Law Enforcement Equitability Study: Final Report’ (National Physical Laboratory, March 2023) <https://science.police.uk/site/assets/files/3396/frt-equitability-study_mar2023.pdf> accessed 1 January 2024.

[65] ibid 4.

[66] Bridges (n 63) [201].

[67] Asress Adimi Gikay, ‘Regulating Use by Law Enforcement Authorities of Live Facial Recognition Technology in Public Spaces: An Incremental Approach’ (2023) 82(3) CLJ 414, 431.

[68] The repression of the Uyghur community in Xinjiang represents one of the most notable instances of systematic LFR deployment and is illustrative of the technology’s wide-ranging human rights impacts. Indeed, the UN Office of the High Commissioner for Human Rights has reported on the existence of ‘a sophisticated, large-scale and systematized surveillance system’ in Xinjiang that is ‘driven by an ever-present network of surveillance cameras, including deploying facial recognition capabilities’. See UN Office of the High Commissioner for Human Rights, ‘OHCHR Assessment of Human Rights Concerns in the Xinjiang Uyghur Autonomous Region, People’s Republic of China’ (OHCHR, 31 August 2022) [96] <https://www.ohchr.org/sites/default/files/documents/countries/2022-08-31/22-08-31-final-assesment.pdf> accessed 1 January 2024.

[69] ‘General Recommendation No. 14, General Recommendation XIV on Article 1, Paragraph 1, of The Convention’ (Committee on the Elimination of Racial Discrimination, 1993) 2.

[70] See: European Convention for the Protection of Human Rights and Fundamental Freedoms, Sept. 3, 1953, art. 14, 213 U.N.T.S 221. Article 14 of the ECHR is materially similar to the anti-discrimination provisions in the ICCPR and CERD. It provides that ‘the enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status’.

[71] D.H. and Others v. the Czech Republic, App. No. 57325/00, [44] (ECtHR. Feb. 7, 2006).

[72] Molla Sali v Greece, App no. 20452/14, [135] (ECtHR, Dec. 9, 2018); Eweida & Others v United Kingdom, App nos. 48420/10, 59842/10, 51671/10 and 36516/10, [88] (ECtHR, Jan. 15, 2013).

[73] For a comprehensive overview of the development of ‘suspect’ discrimination grounds in ECHR case law, see: Oddný Mjöll Arnardóttir, ‘The Differences that Make a Difference: Recent Developments on the Discrimination Grounds and the Margin of Appreciation under Article 14 of the European Convention on Human Rights’ (2014) 14(4) Hum. Rights Law Rev. 647, 649-652.

[74] The ECtHR originally developed the ‘very weighty reasons’ test in cases involving sex or gender discrimination. See: Abdulaziz, Cabales and Balkandali v United Kingdom, App nos. 9214/80, 9473/81 and 9474/81 [78] (ECtHR, 28 May 1985). Even if not referring expressly to the ‘very weighty reasons’ test as such, the Court has reasoned that racial discrimination is ‘a particularly invidious kind of discrimination’, which ‘in view of its perilous consequences, requires from the authorities special vigilance and a vigorous reaction’: Timishev v. Russia, App nos. 55762/00 and 55974/00 (ECtHR, December 13, 2005) [56].

[75] Lehr and Crumpler (n 20) 10.

[76] Glukhin v. Russia, App no. 11519/20 (ECtHR, 4 July 2023).

[77] ibid [90]-[91]

[78] Monika Zalnieriute, ‘Glukhin v. Russia App. No. 11519/20 Judgment’ (2023) 111(4) American Journal of International Law 695, 696.

[79] CERD (n 46), Art 2(1)(c).

[80] General Recommendation No. 31 (n 61), 5(i).

[81] In 2022, the Ryder Review recommended, inter alia, that there was an ‘urgent need for a new, technologically neutral, statutory framework’ for the governance of biometric data in the United Kingdom. See Ryder (n 6) para 8.2. For a comprehensive overview of ‘potential legal and regulatory approaches in selected jurisdictions’ towards LFR governance, see Bu (n 19) 116.
