April 02, 2025
Social Justice
The Oppressive Use of Artificial Intelligence Between Italy, the United Kingdom, and Palestine
Insight by Angelo Boccato
Artificial intelligence is presented in the mainstream debate as a new industrial revolution, with chatbots like ChatGPT and DeepSeek competing in a race for AI innovation. But what about the more sinister applications of artificial intelligence: its connections to, and uses for, racial profiling, surveillance and the broader invasion of privacy and personal freedom? And what is the situation in Italy, the UK and Palestine, the veritable laboratory of the oppressive use of AI?
Surveillance in Italy
The biggest proponent of the use of artificial intelligence for surveillance purposes in the Meloni government is Interior Minister Matteo Piantedosi. For some time, Piantedosi has been pushing a securitarian agenda in the country, targeting migrants, racialised people and the homeless. One example is the introduction of red zones in Milan and Rome, which prohibit the presence of, and provide for the removal of, individuals who ‘pose a concrete danger to public safety’.
According to a note from the Viminale (the Italian Home Office) the red zones should be aimed at individuals with a criminal record for drugs, theft, robbery, crimes against the person or carrying weapons, but the doubt remains that these measures will mainly affect racialised people, the homeless or, more generally, the poor.
‘Since 2023, Piantedosi has been saying he wants to install facial recognition cameras wherever possible, but he forgets two things: the EU AI Act, the first comprehensive legal framework on AI worldwide, whose aim is to promote trustworthy AI in Europe, and the fact that in Italy a moratorium prohibits the installation of biometric surveillance tools until the end of 2025. So there is both the AI Act and the moratorium, albeit an imperfect one. In any case, the Privacy Guarantor must be consulted for any use of facial recognition systems,' Philip Di Salvo, post-doctoral researcher at the Institute for Media and Communications Management at the University of St. Gallen and journalist, explains to Voice Over Foundation.
As Di Salvo points out, under the current legal framework, the installation of facial recognition technologies cannot take place. It is worth remembering, however, that sixteen facial recognition cameras were installed in Como in 2020, a decision that was later blocked following an investigation by Di Salvo himself, Laura Carrer and Riccardo Coluccini in Wired Italia.
‘The installation of facial recognition cameras is a dream on paper for those who have a security culture and think that these technologies can recognise and arrest anyone with a pinch of artificial intelligence, in the blink of a click. In reality, this is not the case. The technology in question is not so effective and is extremely problematic from a discrimination point of view'.
According to the 2023 survey by the European Union Agency for Fundamental Rights, ‘Being Black in the EU. Experiences of People of African Descent’, 71% of people of African descent or with a migration background in Italy say they have been victims of racial profiling at least once.
On 22 October 2024, the European Commission against Racism and Intolerance (ECRI), a body of the Council of Europe, published a report on the respect of minority rights in Italy, harshly criticising Italian institutions for their insufficient action in preventing and combating racism in the country and highlighting widespread racist practices and racial profiling of minorities by law enforcement agencies. The political reaction to the report was very harsh and, to date, no action has been taken to implement ECRI's recommendations.
Considering the legal framework and the reports on racial profiling in Italy, what do these technologies represent in the hands of law enforcement?
'SARI, the facial recognition system used by the police, was partly stopped by the Privacy Guarantor in 2021 (which blocked the system's real-time facial recognition functions) but, in reality, the remote function, which can analyse footage of various types and potentially recognise the people portrayed in it, is still allowed. It is not clear how the police forces have used this system, or on the basis of which frames of reference,' adds Di Salvo.
Data on the impact of racial profiling in Italy are almost non-existent, due to the lack of academic literature on the subject, but there are exceptions such as the Yaya Project, launched by the Ferrara-based associations Occhio ai Media and Citizens of the World, and the Coordinamento per Yaya, sponsored by Goldsmiths University and coordinated by Dr Alice Elliot. The Yaya Project created both a database where experiences of ethnic profiling can be recorded anonymously and a partnership with Account, a British organisation based in Hackney, east London, leading to the birth of the Profiling Inside Out project, coordinated by Elliot.
But what is the state of the art of the use of artificial intelligence on these fronts in the UK, given London's reputation as a surveillance city par excellence?
Surveillance in the UK
According to the security company Clarion, in 2022 there were 942,556 cameras in London: this number alone is indicative of the scale of the surveillance apparatus across the Channel. Then there is the way British police forces reinforce racism through predictive crime technologies, which rely on algorithms and data to pre-empt crimes, as Amnesty International UK documents in its Automated Racism report.
The racist predictive systems used by the British police rely on geographical location and racial profiling. With the former, police monitor geographical areas where future crimes might occur, which tend to be the areas where racialised communities are most concentrated and where stop and search, the police practice that disproportionately targets young people of African, Caribbean and Asian descent, is then carried out.
Amnesty cites as an example the Met's use of Risk Terrain Modelling in the London boroughs of Lambeth and Southwark since September 2020: Lambeth had the second highest rate of stop and search in the capital between December 2020 and October 2021, while people of ‘black ethnic appearance’ (the label used by the Met) were stopped by the police more than four times as often as white people.
On the basis of such profiling, and through artificial intelligence tools, individuals from racialised communities tend to be entered into police databases, only to be presented as individuals potentially capable of committing crimes.
In addition, there is the UK government's order requiring the US company Apple to give security authorities access to encrypted cloud data. It should not be forgotten that the UK has been at the centre of surveillance controversies since the Edward Snowden revelations, which exposed the involvement of British and US intelligence services in the collection and storage of vast amounts of global digital communications.
So far, the Starmer government has presented 50 policy points on AI, but there is little else on which to base predictions. From what can be observed, however, Prime Minister Keir Starmer appears intent on turning the UK into a global hub for artificial intelligence.
Looking at these precedents, and at the recent interest shown by Starmer and Home Secretary Yvette Cooper in following Giorgia Meloni's migration policies and deportation plans in Albania, concerns grow over the government's securitarian use of these tools.
When Chris Kaba, the 22-year-old son of Congolese parents, was shot dead by police in Streatham, south London, on 5 September 2022, the circumstances that led to his death were rooted in the combination of artificial intelligence and racial profiling.
The car Kaba was driving was identified by an automatic number plate recognition (ANPR) camera used by the London Metropolitan Police, which linked it to a previous firearms incident, leading the police to stop Kaba even though the car was not registered to him. ANPR cameras are just one of the automation tools used by British police forces.
The officer who shot Kaba was later found not guilty.
But where is all this apparatus really tested? In which country does this sort of securitarian bio-politics begin? The answer is occupied Palestine.
The Palestine laboratory
The case of the contract between Giorgia Meloni's government, the Israeli company Paragon, and the spyware developed by the latter, used against journalists such as the director of Fanpage, Francesco Cancellato, and activists such as Luca Casarini, is indicative of how Israeli technology companies are at the centre of the global security and surveillance apparatus.
‘Paragon was co-founded by former Israeli Prime Minister Ehud Barak. These companies are aware of the way their software is used,’ Australian-German journalist Antony Loewenstein, author of the book and documentary The Palestine Laboratory, tells Voice Over Foundation.
‘There is no regulation of spyware. The European Union has nominally tried to introduce one, but without success. Spyware should be banned, not regulated. No matter who is in power, it is a bipartisan issue; too many are fearful of tackling the spyware companies and the Israeli government behind them,' Loewenstein adds.
The acquisition of the Israeli cybersecurity group Wiz by Google's parent company Alphabet, for the sum of $32 billion, is indicative of how influential Israeli companies are in the fields of surveillance, cybersecurity and AI.
Wiz was founded in 2020 by Assaf Rappaport, Ami Luttwak, Roy Reznik and Yinon Costica, former members of Unit 8200, an Israeli Intelligence Corps unit that monitors Palestinians and uses AI to select targets in Gaza.
In the book and documentary The Palestine Laboratory (the latter directed by Dan Davies), Loewenstein shows how Israel's security and surveillance apparatus, among the global market leaders on this front, is tested on Palestinians before being exported worldwide.
What does this mean in the daily lives of Palestinians in the Occupied Territories?
‘As Palestinians, we know we are under surveillance. I know my phone is not secure and none of us know to what extent the Israeli authorities can access our information. Normally, we try to be cautious, especially if we cross a checkpoint where soldiers often check our phones. If you have posted something they do not like, you know that you can end up being roughed up. We know that we cannot talk freely among ourselves on the phone because someone else might be listening,' Sara, a Palestinian journalist, who prefers to remain anonymous as a matter of safety, tells Voice Over Foundation.
‘We don't know how deeply they can listen, and we know that the cameras are not just there to record our movements; they are also there to act, as they are connected to artificial intelligence systems. If a camera were to identify me as a terrorist, it would not only record my movements: I could be killed in a second by the weapons connected to that artificial intelligence system, without any human interference. At checkpoints, I have seen that the army uses AI systems like Blue Wolf and Red Wolf, which identify and classify your face, determining whether you are dangerous or not according to their parameters. All this made me realise how dystopian life is in Palestine,' the journalist concludes.
In spite of the narratives that present AI as neutral, the level of dystopia reached in Palestine under Israeli occupation should serve as a warning to the countries, and especially the citizens, of the ‘Global North’, where security is used as a pretext to target mainly racialised people and the most vulnerable and poorest groups.