Dear Project Partners, dear Fellows,

with this second newsletter, we are pleased to continue our series of periodic updates on the European judicial training project "FRICoRe".

In this issue you will find:

  1. Updates on the European Network of Judges and Legal Experts
  2. An updated overview of the planned Transnational Training Workshops
  3. A Learn More Box on new technologies

 

1. EUROPEAN NETWORK OF NATIONAL JUDGES AND LEGAL EXPERTS

As anticipated in the first newsletter, one of the most ambitious objectives at this stage of the project is to create a European Network of national judges and legal experts. The Network brings together stable national teams engaged in the joint development of the ReJus/FRICoRe Caselaw Database and the project Casebooks, suggesting relevant national judgments concerning the effective protection of fundamental rights in the five selected sectors (consumer protection, migration and asylum, data protection, health law, non-discrimination). It goes without saying that, within the context of an EU-wide project like FRICoRe, the direct involvement of national magistrates and academics offers a privileged point of view from which to thoroughly analyze the judicial dialogue between national and European courts. Since the project launch in February, more than twenty experts from across the EU have joined the Network and started reporting relevant national rulings. The Network currently covers twelve different Member States, bringing together twenty-two members from eighteen different institutions. The table below gives more information on its current structure. The Network remains open to new members: do not hesitate to contact us for more details if you are interested in joining.

 

AUSTRIA           | Austrian Supreme Administrative Court
BELGIUM           | Belgian Market Court
CROATIA           | County Court of Rijeka
ESTONIA           | Tallinn Court of Appeal
FINLAND           | Supreme Administrative Court of Finland (2 members)
FRANCE            | Sorbonne School of Law
FRANCE            | French Court of Cassation
FRANCE            | French Council of State
HUNGARY & ROMANIA | Sapientia Hungarian University of Transylvania, Romania
GREECE            | Greek School of Judiciary
NETHERLANDS       | University of Groningen (3 members)
NETHERLANDS       | Dutch Council of State
ROMANIA           | Masaryk University (Czech Republic)
ROMANIA           | University of Bucharest (2 members)
SLOVAKIA          | Regional Court of Trnava
SPAIN             | Supreme Court of Spain
SPAIN             | First Instance Civil Court of Barcelona
SPAIN             | Appeal Court of Huelva

 

2. UPDATES ON THE TRANSNATIONAL TRAINING WORKSHOPS

After confirming the dates of the first planned workshop on Consumer Protection (Barcelona, February 2020), in this newsletter we are pleased to also inform you that the second training event is scheduled for Thursday 26th and Friday 27th March 2020. The workshop will take place in Warsaw and will focus on effective data protection. More information will be provided in conjunction with the publication of the call for participation.

The list below offers an overview of both workshop formats to be held throughout the project: Transnational Training Workshops (TTWs) and Transnational “Training the Trainers” Workshops (TTTWs).

 

  • TTW on Consumer Protection → Barcelona, Pompeu Fabra University, 3rd and 4th February 2020
  • TTW on Data Protection → Warsaw, 26th and 27th March 2020
  • TTW on Non-discrimination → Groningen, June 2020
  • TTW on Health and Fundamental Rights → Trento, October 2020
  • TTW on Immigration and Asylum → Paris, January 2021
  • TTW on Cross-sector and horizontal perspective → Coimbra, June 2021

 

  • TTTW 1 → Scandicci (Florence), December 2020
  • TTTW 2 → Barcelona, April 2021
  • TTTW 3 → Groningen, October 2021

 

3. TO LEARN MORE

With this second issue, we want to launch a series of extra content aimed at making this newsletter not only an updating tool but also an opportunity to deepen your knowledge of topics and people worth learning more about.

To begin with, we propose a Learn More Box on the possible discriminatory effects related to the use of Artificial Intelligence systems. We hope it will pave the way for a deeper exploration, throughout FRICoRe, of the legal implications both in the field of non-discrimination and in that of the relationship between law and new technologies.

Dr Silvio Ranise and Dr Carla Mascia from the Fondazione Bruno Kessler, our project partner, helped us to start the discussion that we intend to develop further throughout the project.


Automated decision systems based on machine learning algorithms often lead to discriminatory results. Is it possible to identify the steps of the algorithm in which such problems may arise?

Discriminatory results may arise at different stages of the design process of an automated decision system. Creating such a system is an iterative process which consists mainly of three steps: data preparation, training of the chosen algorithm to create a model, and the prediction phase. Since the performance of the model depends, among other factors, on the historical data used to train the algorithm, data preparation and training are the pivotal steps for building a fair algorithm, whereas the discriminatory behavior of the resulting model emerges in the prediction phase.
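
The three stages described above can be sketched in a few lines of code. This is a toy illustration only, using an invented dataset and a deliberately simple nearest-centroid classifier rather than any system discussed in the interview:

```python
# A minimal sketch of the three stages: data preparation, training, prediction.
# All data and names here are invented for illustration.

# 1. Data preparation: historical examples as (feature, label) pairs.
raw = [(1.0, 0), (1.2, 0), (0.9, 0), (3.0, 1), (3.2, 1), (2.9, 1)]
features = [x for x, _ in raw]
labels = [y for _, y in raw]

# 2. Training: fit a model (here, the mean feature value per class).
def train(features, labels):
    centroids = {}
    for cls in set(labels):
        pts = [x for x, y in zip(features, labels) if y == cls]
        centroids[cls] = sum(pts) / len(pts)
    return centroids

model = train(features, labels)

# 3. Prediction: classify new inputs by the nearest class centroid.
def predict(model, x):
    return min(model, key=lambda cls: abs(x - model[cls]))

print(predict(model, 1.1))  # prints 0: close to the class-0 examples
print(print_label := predict(model, 3.1))  # prints 1: close to the class-1 examples
```

Whatever patterns the historical examples contain, fair or not, are frozen into the model at step 2 and replayed at step 3.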

What are the main issues that cause training data to be unfair?

Computational models typically rely on the assumption that the data faithfully represents the population. When this assumption is not fulfilled, models may lead to discrimination towards certain protected groups. There are several reasons why data may not be faithful: forms of social bias may be incorporated in the training data, or certain protected groups may be under- or over-represented compared to others. Since machine learning algorithms follow pre-existing patterns that persist in society, the model is likely to reproduce the same bias or to lead to incorrect inferences.
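
The under-representation problem can be made concrete with a toy numeric example (all numbers invented): a statistic computed from a sample that under-represents one group diverges from the population value, and a model trained on such a sample inherits the skew.

```python
# Two groups of equal size in the population, with different feature values.
population = [("A", 10)] * 50 + [("B", 20)] * 50

# A biased sample that heavily under-represents group B.
sample = [("A", 10)] * 45 + [("B", 20)] * 5

def mean(rows):
    return sum(v for _, v in rows) / len(rows)

print(mean(population))  # 15.0: the true population average
print(mean(sample))      # 11.0: skewed toward the over-represented group
```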

What are the most relevant examples of discriminatory results produced by biased algorithms?

In recent years, several cases of algorithms resulting in discrimination have been registered in different areas. Examples include Google’s image recognition algorithm that classified black people as gorillas, Amazon’s job-recruiting engine that discriminated against women, and the algorithm used by St. George’s Hospital, in the United Kingdom, which was shown to have systematically disfavored racial minorities and women with credentials otherwise equal to other applicants’.

It is clear that the social impact of discriminatory algorithms differs depending on the context. One of the most well-known cases is COMPAS, a tool used by some American courts to generate scores designed to gauge the chance of a person committing another crime within two years if released. In 2016, ProPublica published an article by Angwin et al. showing how COMPAS discriminated against black defendants. The journalists pointed out different levels of accuracy across black and white defendants. In fact, a higher number of black defendants were classified as false positives, i.e. people classified as “high risk” but subsequently not charged with another crime, and a higher number of white defendants were classified as false negatives, i.e. people classified as “low risk” but subsequently charged with another crime.
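
The kind of per-group error analysis described above can be computed directly from prediction records. The sketch below uses invented data; the field names ("group", predicted risk, reoffended) are hypothetical, not drawn from the actual COMPAS dataset:

```python
# Toy records: (group, predicted_high_risk, reoffended). All invented.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, True),  ("A", False, False),
    ("B", True,  True),  ("B", False, True),  ("B", False, True),
    ("B", False, False), ("B", True,  False),
]

def rates(records, group):
    rows = [(p, y) for g, p, y in records if g == group]
    fp = sum(1 for p, y in rows if p and not y)    # "high risk", no new crime
    fn = sum(1 for p, y in rows if not p and y)    # "low risk", new crime
    negatives = sum(1 for _, y in rows if not y)
    positives = sum(1 for _, y in rows if y)
    return fp / negatives, fn / positives

for g in ("A", "B"):
    fpr, fnr = rates(records, g)
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

On this invented data, group A has the higher false positive rate and group B the higher false negative rate: the same shape of disparity ProPublica reported.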

Is it possible to assess whether an automated decision system contributes to discrimination? If so, how? If not, what are the main obstacles?

Determining whether or not an algorithm contributes to discrimination is a critical topic right now, both in the research field and in the political and social debate. In the former, the issue is framed as an optimization problem that considers several criteria at the same time, related to conflicting notions of fairness. In fact, there is no commonly agreed definition of fairness, and different disciplines have different perspectives. A multi-disciplinary approach, in which different competences work together, including technology, law, psychology, social sciences, and ethics, is therefore required to deal with such problems.
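
That fairness notions can conflict is easy to demonstrate on a toy example (all data invented): the predictions below satisfy "demographic parity" (equal selection rates across groups) while violating equality of false positive rates.

```python
# Per-group lists of (predicted_positive, actually_positive) pairs. Invented.
preds = {
    "A": [(True, True), (True, True), (False, False), (False, False)],
    "B": [(True, False), (True, True), (False, False), (False, True)],
}

def selection_rate(rows):
    # Fraction of people the model selects ("demographic parity" compares these).
    return sum(1 for p, _ in rows if p) / len(rows)

def false_positive_rate(rows):
    # Among true negatives, the fraction wrongly selected.
    negatives = [(p, y) for p, y in rows if not y]
    return sum(1 for p, _ in negatives if p) / len(negatives)

for g, rows in preds.items():
    print(g, selection_rate(rows), false_positive_rate(rows))
# Selection rates are equal (0.5 and 0.5), but false positive
# rates differ (0.0 for A, 0.5 for B): one criterion is satisfied
# while the other is violated on the very same predictions.
```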

Recently, political actors have also been approaching the issue: see for instance the Algorithmic Accountability Act, a bill proposed by congressional Democrats. A regular evaluation of machine learning algorithms for accuracy, fairness, bias and discrimination would be a step toward ensuring the use of non-discriminatory machine learning algorithms. Another connected issue, which may contribute to the social acceptance of machine learning algorithms, is their interpretability and explainability: understanding why a model returns a specific output and, in this context, why it leads to unfair behavior. This too is a hot topic, currently attracting a great deal of effort in the research community.


We wish you a pleasant summer break and remain at your disposal for any clarifications, requests or comments.

 

Kind regards,

The FRICoRe Coordination Team

Posted on: 18 July 2019