
Study highlights risks of facial recognition technologies
Smile! Your face is not only being filmed but also classified, compared and identified, mainly by public security agencies, and most of the time without your knowledge. That is the finding of a study by the Federal Public Defender’s Office (DPU) in partnership with the Center for Security and Citizenship Studies (CESeC), a research center linked to Candido Mendes University in Rio de Janeiro.
Released on Wednesday (7), the report Mapping Biometric Surveillance points out that, after hosting the 2014 World Cup, Brazil became a vast field of digital surveillance in which so-called Facial Recognition Technologies (TRFs, in the Portuguese acronym) found fertile ground to spread, thanks in part to the promise of making it easier to identify criminals and locate missing persons.
“Facial recognition has been widely adopted by public bodies in Brazil, in a process that began with the mega-events held in the country – especially the 2014 Football World Cup and the 2016 Olympic Games,” argue the DPU’s federal public defenders and CESeC researchers, referring to the sophisticated and expensive facial recognition cameras increasingly present in the urban landscape.
According to the researchers, as of April this year there were at least 376 active facial recognition projects in Brazil. Together, these projects have the capacity to monitor almost 83 million people, equivalent to about 40% of the Brazilian population, and they have already drawn at least R$160 million in public investment. That figure was calculated from information that 23 of the 27 federative units provided to the study’s authors; Amazonas, Maranhão, Paraíba and Sergipe did not respond to the survey, which was conducted between July and December 2024.
“Despite this entire scenario, regulatory solutions are lagging behind,” argue the DPU and CESeC researchers, noting that Brazil still has no laws regulating the use of digital surveillance systems, in particular facial recognition cameras.
Furthermore, the experts say there is a lack of external oversight mechanisms, uniform technical and operational standards, and transparency in how the systems are implemented. This increases the chances of serious errors, privacy violations, discrimination and misuse of public funds.
Errors
In a separate survey, CESeC mapped 24 cases between 2019 and April 2025 in which it says it identified failures in facial recognition systems. The best-known involves personal trainer João Antônio Trindade Bastos, 23.
In April 2024, military police removed Bastos from the stands of the Lourival Batista Stadium in Aracaju, Sergipe, during the final of the Sergipe state championship. They took him to a room and searched him roughly. Only after checking all of his documents, and after he had answered a series of questions to prove he was who he said he was, did the officers reveal that the facial recognition system installed in the stadium had mistaken him for a fugitive.
Outraged, Bastos took to social media to speak out against the injustice he had suffered. The repercussions of the case led the government of Sergipe to suspend the use of the technology by the Military Police – which, according to news reports at the time, had already used it to detain more than ten people.
Bastos is Black, like most people flagged by surveillance and facial recognition systems in Brazil and elsewhere. According to the DPU and CESeC report, there are indications that 70% of police forces worldwide have access to some type of TRF and that 60% of countries use facial recognition in airports. In Brazil, “more than half of the police stops prompted by facial recognition resulted in mistaken identifications, highlighting the risk of wrongful arrests”.
“Concerns about the use of these technologies are not unfounded,” the experts warn, citing international research that shows that in some cases, error rates in systems are “disproportionately high for certain population groups, being ten to 100 times higher for black, indigenous and Asian people compared to white individuals.” This finding prompted the European Parliament to warn in 2021 that “[t]he technical inaccuracies of Artificial Intelligence [AI] systems designed for remote biometric identification of individuals may lead to biased results and have discriminatory effects.”
Legislation
When addressing “institutional and regulatory challenges,” the researchers point out that in December 2024, the Senate approved Bill No. 2338/2023, which seeks to regulate the use of artificial intelligence, including biometric systems in public security. To become law, the proposal will have to be approved by the Chamber of Deputies, which last month created a special committee to debate the issue.
Furthermore, the DPU and CESeC researchers argue that although the bill proposes to prohibit the use of remote, real-time biometric identification systems in public spaces, the text approved by the Senate provides for so many exceptions that, in practice, it functions “as a broad authorization for the implementation” of these systems.
“The categories of permissions [in the approved text] include criminal investigations, flagrant crimes, searches for missing persons and recapture of fugitives, situations that cover a considerable spectrum of public security activities. Considering the history of abuses and the lack of effective control mechanisms, this openness to use ends up maintaining the possibility of a surveillance state and violation of rights.”
Recommendations
The researchers conclude by arguing for an urgent “qualified public debate”, with the active participation of civil society, academics, and representatives of public oversight bodies and international organizations.
They also recommend what they call “urgent measures,” such as passing a specific national law to regulate the use of the technology; standardizing protocols that respect due process; and conducting regular independent audits.
The experts also point to the need for public agencies to make the contracts and databases involved more transparent, ensuring that the public has access to clear information about facial recognition systems, and to train the public agents who handle them. They further suggest requiring prior judicial authorization before information obtained through TRFs can be used in investigations, setting a time limit on the storage of biometric data, and strengthening oversight of the private companies that operate these systems.
“We hope that these findings can not only guide and support the processing of Bill 2338 in the Chamber of Deputies, but also serve as a warning for regulatory and control bodies to be aware of what is happening in Brazil. The report highlights both racial biases in the use of technology and problems of misuse of public resources and lack of transparency in its implementation,” says CESeC general coordinator Pablo Nunes in a statement.