Mon. Jul 15th, 2024



The aim of this report is to establish a problematised overview of what is currently being done in Europe in the field of remote biometric identification (RBI), and to assess in which cases these deployments could amount to forms of biometric mass surveillance.

Private and public actors are increasingly deploying “smart surveillance” solutions, including RBI technologies which, if left unchecked, could become biometric mass surveillance. Facial recognition technology has been the most discussed of the RBI technologies. However, there seems to be little understanding of the ways in which this technology might be applied, and of the potential impact of such a broad range of applications on the fundamental rights of European citizens.

The development of RBI systems by authoritarian regimes, which may subsequently be exported to and used within Europe, is of concern. The concern relates not only to the deployment of such technologies but also to the lack of adequate insight into the privacy practices of the companies supplying the systems.

Four main positions have emerged among political actors with regard to the deployments of RBI technologies and their potential impact on fundamental rights: 1) active promotion; 2) support with safeguards; 3) moratorium; and 4) outright ban.


The current market of RBI systems is overwhelmingly dominated by image-based products, at the centre of which is facial recognition technology (FRT). Other products such as face detection and person detection technologies are also in use.

FRT is typically deployed to perform two types of searches: cooperative searches for verification and/or authentication purposes (a one-to-one comparison against a claimed identity), and non-cooperative searches to identify a data subject (a one-to-many comparison against a database). The former involves voluntary consent from the data subject to capture their image, while the latter may not.
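The distinction between the two search modes can be sketched in code. The sketch below assumes face images have already been reduced to fixed-length embedding vectors; the identities, toy vectors, and threshold are illustrative assumptions, not details from the report. Verification compares a probe against one claimed enrolment (1:1), while identification searches the whole enrolment database (1:N):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative enrolment database: identity -> embedding (toy 3-D vectors
# standing in for the high-dimensional embeddings a real system produces).
enrolled = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.1, 0.9, 0.2],
}

def verify(probe, claimed_id, threshold=0.8):
    """1:1 cooperative search: does the probe match the claimed identity?"""
    return cosine_similarity(probe, enrolled[claimed_id]) >= threshold

def identify(probe, threshold=0.8):
    """1:N non-cooperative search: which enrolled identity, if any, matches?"""
    best_id = max(enrolled, key=lambda i: cosine_similarity(probe, enrolled[i]))
    if cosine_similarity(probe, enrolled[best_id]) >= threshold:
        return best_id
    return None

probe = [0.88, 0.12, 0.01]
print(verify(probe, "alice"))  # True: similarity is well above the threshold
print(identify(probe))         # "alice": best match across all enrolments
```

The design difference matters legally as well as technically: verification only touches the one record the subject claims, whereas identification compares the probe against every enrolled person.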

Live facial recognition is currently the most controversial deployment of FRT: Live video feeds are used to generate snapshots of individuals and then match them against a database of known individuals – the “watchlist”.
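A live deployment of this kind can be sketched as a loop over incoming snapshots, each compared against every watchlist entry. All names, vectors, and the similarity threshold below are illustrative assumptions, not details from the report. Note that every passer-by in frame is embedded and compared, whether or not they appear on the watchlist:

```python
import math

# Illustrative watchlist: name -> embedding (toy 2-D vectors).
WATCHLIST = {
    "suspect_1": (1.0, 0.0),
    "suspect_2": (0.0, 1.0),
}
THRESHOLD = 0.9  # assumed similarity cut-off for raising an alert

def match_score(a, b):
    """Cosine similarity between two 2-D embeddings."""
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

def scan_feed(snapshots):
    """Compare every snapshot from the live feed against the whole watchlist.

    Returns (alerts, comparisons): alerts are watchlist hits; the comparison
    count shows that every person in frame is processed, hit or not.
    """
    alerts = []
    comparisons = 0
    for frame_id, embedding in snapshots:
        for name, ref in WATCHLIST.items():
            comparisons += 1
            if match_score(embedding, ref) >= THRESHOLD:
                alerts.append((frame_id, name))
    return alerts, comparisons

# Three passers-by: one resembles suspect_1, two are unknown to the list.
feed = [("f1", (0.98, 0.05)), ("f2", (0.5, 0.5)), ("f3", (-0.7, 0.1))]
alerts, comparisons = scan_feed(feed)
print(alerts)       # [('f1', 'suspect_1')]
print(comparisons)  # 6 -- every snapshot compared against every entry
```

The comparison count is the crux of the controversy: even though only one person triggers an alert, all three were biometrically processed.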

Other RBI technologies are being deployed, though their use at present is marginal compared to FRT; these include gait (movement), audio, and emotion recognition technologies, amongst others.

A better understanding of the technical components and possible usage applications of image-based RBI technologies is needed in order to assess their potential political implications.

RBI technologies are subject to technical challenges and limitations which should be considered in any broader analysis of their ethical, legal, and political implications.


Current deployments of RBI technologies within Europe are primarily experimental and localised. However, the technology coexists with a broad range of algorithmic processing of security images being carried out on a scale that ranges from the individual level to what could be classed as biometric mass surveillance. Distinguishing the various characteristics of these deployments is not only important to inform the public debate, but also helps to focus the discussion on the most problematic uses of the technologies.

Image- and sound-based security applications used for authentication purposes do not currently pose a risk of biometric mass surveillance. However, it should be noted that an alteration to the legal framework could increase the risk of their being deployed for biometric mass surveillance, especially as many of the databases in use contain millions of data subjects.

In addition to authentication, image- and sound-based security applications are being deployed for surveillance. Surveillance applications include the deployment of RBI in public spaces.

Progress on two fronts makes the development of biometric mass surveillance more than a remote possibility. Firstly, the current creation and/or upgrading of biometric databases being used in civil and criminal registries. Secondly, the repeated piloting of live-feed systems connected to remote facial and biometric information search and recognition algorithms.


The use of biometric tools for law enforcement purposes in public spaces raises a key issue of legal permissibility in relation to the collection, retention and processing of data, when considered against the individual’s fundamental rights to privacy and personal data protection. Viewed through this lens, RBI technologies could have a grave impact on the exercise of a range of fundamental rights.

The deployment of biometric surveillance in public spaces must be subject to strict scrutiny in order to avoid circumstances that could lead to mass surveillance. This includes targeted surveillance, which has the potential to collect data indiscriminately on any person present in the surveilled location, not only the target data subject.

The normative legal framework for conducting biometric surveillance in public spaces is found in EU secondary legislation on data protection: the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED). The use of biometric data under this framework must be reviewed in light of the protection offered by fundamental rights.

The European Commission’s April 2021 proposal for a Regulation on artificial intelligence (the Artificial Intelligence Act) aims to harmonise regulatory rules for the Member States on AI-based systems. The proposed Regulation lays out rules focused on three categories of risk (unacceptable, high, and low/minimal) and anticipates covering the use of RBI systems. It also aims to complement the rules and obligations set out in the GDPR and LED.


Four main positions on RBI systems have emerged among political actors as a result of both technical developments in the field and the early legislative activity of EU institutions: 1) active promotion; 2) support with safeguards; 3) moratorium; and 4) outright ban.

Those in favour of support with safeguards argue that the deployment of RBI technologies should be strictly monitored because of the potential risks they pose, including the danger that FRT, for example, could contribute to the further criminalisation or stigmatisation of groups of people who already face discrimination.

The European Parliament passed a resolution on artificial intelligence in January 2020 in which it invites the Commission “to assess the consequences of a moratorium on the use of facial recognition systems”. If deemed necessary, such a moratorium could affect some existing uses of FRT, including its deployment in public spaces by public authorities.

A number of EU and national NGOs have called for an outright ban on the use of RBI, with some arguing that the mass processing of biometric data from public spaces creates a serious risk of mass surveillance that infringes on fundamental rights.

The European Commission’s legislative proposal for an Artificial Intelligence Act (EC 2021) is both a proposal for a regulatory framework on AI and a revised coordinated plan to support innovation. One feature of the Act is the establishment of risk-dependent restrictions which would apply to the various uses of AI systems.

Full version & source – Greens/EFA:
