Tribune. On Wednesday, October 6, the European Parliament adopted a resolution on artificial intelligence (AI) in criminal law and its use by police and judicial authorities in criminal matters. According to its rapporteur, Petar Vitanov, the text calls for "a moratorium on the deployment of facial recognition systems for law-enforcement purposes, as these technologies have proven ineffective and often produce discriminatory results."
Such a moratorium would limit the experimentation and development of AI systems that reconcile effectiveness with respect for freedoms. Yet to bring about such European solutions, it is better to experiment than to prohibit. In the resolution, adopted by 377 votes in favor, 248 against and 62 abstentions, the European Parliament expresses strong caution about the use of AI in the field of security, marked by the desire to ban private facial-recognition databases, predictive policing based on behavioral data, and social scoring systems for citizens.
The Parliament takes note of the limits of certain AI software, whose reliability has not been proven and which is not yet fully operational. It also takes up the criticism, widely documented by researchers and NGOs, particularly in the United States, that the use of these new technologies does not make security action neutral but, on the contrary, introduces biases (racial, sexist, etc.).
An obstacle to innovation
All these points are valid and should inform a European position marked by a demand for strong guarantees and a ban on AI systems incompatible with our values and our rules. As the European Commission underlines in its proposal for a regulation of April 21, 2021 establishing harmonized rules on AI, research, experimentation and innovation are necessary for trustworthy European AI solutions to emerge.
However, by calling for a moratorium, the European Parliament is blocking the path to innovation. This is a trap that makes pragmatic analysis impossible, since we refrain from testing the effectiveness of the systems in question. It is also a way of standing apart from any progress that might be made in a booming field that innovates daily to push back technical limits.
Finally, it means leaving ourselves vulnerable by forgoing strategic autonomy: such a postponement transfers responsibility for the development and use of these technologies to rival powers and foreign private companies. Postponing the development of AI technologies means accepting to become dependent in the future.