
Article 22 GDPR and prohibition of discrimination. An outdated provision?

by Davide Baldini

Introduction

Amongst its stated objectives, the European Union General Data Protection Regulation (the “GDPR”)[1] endeavours to protect all the fundamental rights recognised under EU law which are challenged by the processing of personal data. As such, the GDPR does not exclusively uphold the most obvious rights to respect for private life and to the protection of personal data set forth in Art. 7 and 8 of the Charter[2]. This broad aim of the GDPR is expressly laid down in its Art. 1(2)[3] and reflects what is already inherent in Art. 51 of the Charter, which provides that the latter shall apply when “implementing Union law”; since the application of the Regulation implements Union law, all of its provisions must be interpreted in accordance with, and in light of, all the rights recognised by the Charter.

This is also reflected by various GDPR provisions, whose aims evidently go far beyond privacy or personal data protection. The most obvious example is the right to data portability, which does not advance in any way the data subject’s right to respect for private life, but has a clear foundation in consumer protection (protected under Art. 38 of the Charter), as it endeavours to reduce switching costs for consumers in the digital environment[4] by empowering individuals to obtain a copy of the personal data provided to the data controller in a “structured, commonly used and machine-readable format”, as well as to “transmit those data to another controller”.

Another GDPR provision which upholds non-privacy-related rights is Art. 22 GDPR. This provision seeks, amongst other objectives, to prevent algorithmic discrimination and may thus be deemed to be grounded in the prohibition of discrimination, which is recognised as a fundamental right of EU law under Art. 21(1) of the Charter[5]. This contribution briefly examines whether this GDPR provision can effectively tackle the occurrence of discrimination in automated (algorithmic)[6] decisions.

When is an algorithmic decision lawful? The structure of Art. 22 GDPR

Art. 22(1) GDPR states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The following paragraph lays down three exceptions to this rule; an algorithmic decision as described above is allowed where it:

“(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent.”

This provision seeks to limit the occurrence of algorithmic decisions which do not involve any (meaningful)[7] human intervention; it does so by narrowing the instances in which this kind of personal data processing may be lawful. As such, this norm is not directly concerned with avoiding biased or unfair decisions.

Similarly, Art. 22(3) does not seem to be directly concerned with anti-discrimination issues.[8]

More interestingly from an anti-discrimination perspective, Art. 22(4) GDPR states that “Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place”.

Again, the first part of the provision establishes a prima facie prohibition for data controllers: any algorithmic decision which produces legal or similarly significant effects on the data subject is generally prohibited where it is based on the so-called special categories of personal data referred to in Art. 9(1) (perhaps more commonly known as “sensitive data”). It should be noted that these categories of data overlap substantially with the so-called “protected grounds” of EU anti-discrimination law, as reflected in Art. 21(1) of the Charter[9].

It is therefore apparent that Art. 22(4) aims at preventing algorithmic discrimination, as it prevents important algorithmic decisions from being based on data which reveals an individual’s membership of a group protected under anti-discrimination law, except where the narrow exceptions set forth in Art. 9(2)(a) or 9(2)(g) apply.[10]

Special categories of personal data are commonly understood as information which directly indicates, by itself, membership of a special category. An example could be the records of an individual’s donations to a political party, which directly reveal her political opinions. Under this interpretation, Art. 22(4) seems to rest on the assumption that removing such information from the dataset used by the algorithm is sufficient to prevent discrimination.

However, it has been correctly observed that this data-purging mechanism falls short of effectively protecting the right to non-discrimination[11]. This issue is addressed in the next section.

Does Art. 22(4) effectively tackle algorithmic discrimination?

A predictive or machine-learning algorithm[12] might learn how to discriminate[13], even when this outcome is not intended by its developers. In particular, the data used to train the algorithm may be intrinsically biased against one or more protected categories, so that, during the automated decision-making phase, individuals belonging to protected groups are treated unfairly. This might happen when correlations develop between protected categories (e.g. sexual orientation or religious beliefs) and target variables (such as job performance)[14].

More often, though, a correlation might develop between such target variables and “proxy data”, namely data which do not directly reveal membership of a special category, but which have become correlated, during the training phase of the algorithm, with membership of such a category. For example, an individual’s address of residence may indirectly reveal her ethnic origin, where the address is located in a neighbourhood predominantly populated by a specific minority.

In these situations, the decision-making outcome for a protected group may become systematically different from that of other groups, even though the dataset processed by the algorithm does not include any special category of personal data as commonly understood under the interpretation described above: typically, an individual’s address of residence is not considered a special category of personal data under Art. 9(1).

Therefore, the group is consistently discriminated against, even though the prohibition set forth by Art. 22(4) is, arguably, respected.
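To make this mechanism concrete, the following minimal sketch illustrates, under entirely hypothetical assumptions, how a decision rule derived from historically biased data can reproduce a disparity between groups even though no special category of personal data is ever present in the dataset. The variable names (`minority`, `postcode_a`, `hired`) and the numerical rates are illustrative assumptions, not taken from any real dataset or from the sources cited in this article.

```python
# Hypothetical illustration of proxy discrimination: the protected attribute is
# never given to the "model", yet a correlated proxy feature (the postcode)
# carries it into the decision. All names and figures are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (special category): membership of a minority group.
# It is deliberately excluded from the training data below.
minority = rng.random(n) < 0.3

# Proxy feature: postcode "A" is predominantly inhabited by the minority group,
# so it is strongly correlated with the protected attribute.
postcode_a = np.where(minority, rng.random(n) < 0.9, rng.random(n) < 0.1)

# Historical decisions were biased against postcode "A" (and hence the minority).
hired = rng.random(n) < np.where(postcode_a, 0.2, 0.6)

# "Training": estimate the historical hiring rate per postcode only.
rate_a = hired[postcode_a].mean()
rate_b = hired[~postcode_a].mean()

# "Decision rule" learned from the biased history, applied to everyone.
predicted_rate = np.where(postcode_a, rate_a, rate_b)

# The special category never entered the dataset, yet outcomes differ by group.
print("predicted positive rate, minority:    ", round(predicted_rate[minority].mean(), 2))
print("predicted positive rate, non-minority:", round(predicted_rate[~minority].mean(), 2))
```

Running the sketch shows a markedly lower predicted rate for the minority group, which is precisely the situation described above: the prohibition of Art. 22(4), literally construed, is respected, yet the discriminatory outcome persists.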

A proposed remedy: a rights-based interpretation of Art. 22(4) GDPR

As observed above, Art. 22(4) seeks to prevent algorithmic decisions from having discriminatory effects towards data subjects belonging to protected categories. However, a literal interpretation of this prohibition would deny it any concrete anti-discriminatory effect, as merely removing special categories of data from the dataset would not prevent the occurrence of discrimination.

Nevertheless, this contribution argues that, given the fundamental-rights foundation of this provision (and of the GDPR itself), Art. 22(4) should be interpreted as encompassing not only data which immediately reveals membership of a special category (such as, for example, country of birth, which may directly reveal ethnic origin), but also data which indirectly reveals such membership in the context of the algorithmic decision-making process. Under this broader interpretation, this latter type of data should either be excluded from the dataset or processed in a way which diminishes its correlation with special categories, in order to avoid any discriminatory outcome.
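As a purely illustrative sketch of what such a broader reading could imply in practice, a controller might audit the candidate input features against a protected attribute held solely for testing purposes, and exclude (or further process) those which turn out to act as proxies. The feature names and the correlation threshold below are assumptions made for the example; the GDPR does not prescribe any such procedure or numerical value.

```python
# Hypothetical pre-processing audit: measure how strongly each candidate feature
# correlates with a special category and drop those exceeding a chosen threshold.
# Feature names and the 0.4 cut-off are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Protected attribute, held only for auditing and never fed to the decision model.
minority = (rng.random(n) < 0.3).astype(float)

# Candidate input features for the automated decision.
features = {
    "postcode_a": np.where(minority == 1, rng.random(n) < 0.9, rng.random(n) < 0.1).astype(float),
    "years_of_experience": rng.normal(10, 3, n),
}

PROXY_THRESHOLD = 0.4  # illustrative cut-off for an "indirectly revealing" feature

retained = []
for name, values in features.items():
    corr = abs(np.corrcoef(values, minority)[0, 1])
    if corr >= PROXY_THRESHOLD:
        print(f"excluding '{name}': correlation with the special category = {corr:.2f}")
    else:
        retained.append(name)

print("features retained for the automated decision:", retained)
```

Whether the protected attribute may itself be lawfully processed for such auditing purposes is a separate question, which this sketch does not address.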

This proposed interpretation also appears to be in line with the case-law of the Court of Justice of the European Union, which has consistently held that data protection provisions should be interpreted so as to achieve their effectiveness in upholding fundamental rights[15]. The rights-based reasoning of the Court when applying data protection law has become especially frequent after the adoption of the Lisbon Treaty, which provided the Charter (and the right to data protection enshrined in its Art. 8) with “the same legal value as the Treaties”[16], thereby strengthening the fundamental-rights aim of the GDPR.

In conclusion, in order to prevent Art. 22(4) from being devoid of any concrete protective effect towards the data subject, it is suggested that, in an algorithmic setting, the definition of “special categories of personal data” should be read in a context-based, teleological and rights-based way, taking into account the anti-discriminatory aim of the provision.


[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), O.J. L 119, 4.5.2016, p. 1–88.

[2] Charter of Fundamental Rights of the European Union, O.J. C 326, 26.10.2012, p. 391–407.

[3] “This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data”.

[4] Lynskey O., The Foundations of EU Data Protection Law, Oxford University Press, 2016, p. 263.

[5] “Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.”

[6] For the purpose of this article, the term “algorithm” identifies “a sequence of commands for a computer to transform an input into an output” (see Fundamental Rights Agency, #BigData: Discrimination in data-supported decision making, 2018, p. 4, available at https://fra.europa.eu/en/publication/2018/big-data-discrimination).

[7] Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, WP251rev.01.

[8] “In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”. In requiring the data controller to “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests”, it might be argued that this provision requires the data controller to ensure that appropriate mathematical or statistical procedures, as well as technical and organisational measures, are put in place in order to avoid any discriminatory effect (cfr. recital 71 GDPR). However, fully exploring such a possibility exceeds the scope of this contribution.

[9] Specifically, as regards race, colour, ethnic or social origin, genetic features, religion or belief, political or any other opinion, membership of a national minority, disability, age or sexual orientation.

[10] Respectively, where the processing is based on the data subject’s explicit consent, and when “processing is necessary for reasons of substantial public interest, on the basis of Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject”.

[11] See, among others, Goodman B., Discrimination, Data Sanitisation and Auditing in GDPR, in European Data Protection Law Review, 2016, p. 493 ss., 502.

[12] That is, for the purpose of this article, an algorithm that processes data in order to generate models and applies such models to (other) data.

[13] For example, the Racial Equality Directive (2000/43/EC) gives the following definition of direct discrimination: “where one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of racial or ethnic origin”.

[14] Goodman B., op. cit., p. 499.

[15] Court of Justice of the European Union, Case C-131/12, “Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González”, ECLI:EU:C:2014:317, par. 58 and the case-law cited therein.

[16] Art. 6(1) of the Treaty on European Union.

