LeADS Working Paper Series Part III: Transparency and Relevancy of Direct-To-Consumer Genetic Testing Privacy and Consent Policies in the EU

Xengie Doan and Fatma Dogan (ESR 9 and ESR 8, respectively) participated in this WOPA. Xengie is working on collective dynamic consent for genetic data and was interested in exploring the WOPA topic to better understand the current state of publicly available information from popular direct-to-consumer genetic testing companies. Given the information these companies publish, how transparent are their data processing activities and their communication of risks and benefits (including collective implications, e.g. risks and benefits that also affect family members), and is the information framed in a way that enables potential customers to know their rights? These rights are granted by company policies and by EU regulations such as the GDPR. While these companies may be global or serve multiple countries, they must respect EU regulations when serving EU countries or residents. This coincides with Fatma's legal expertise and her interest in health data sharing in the EU. The WOPA relates to the LeADS crossroads, drawing on concepts such as trust and transparency and user empowerment, although it is not directly related to any previous work with the crossroads SOTAs. This work contributes to a better understanding of how such companies operate and what information they deem important to share (for legal and customer-empowerment reasons), and we offer suggestions for more user-centred, collective, and transparent policies.
Abstract of the Working Paper
The direct-to-consumer (DTC) genetic testing market in Europe is expected to grow to more than 2.7 billion USD by 2032. Though the service offers ancestry and wellness information from one's own home, it comes with privacy issues such as the non-transparent sharing of highly sensitive data with third parties. While the GDPR sets out transparency requirements, in practice they may be confusing to follow and fail to uphold the goals of transparency: that individuals understand how their data are processed and can exercise their rights in a user-centred manner. We therefore examined the public privacy and consent policies of six large DTC genetic testing companies and identified information flows using a contextual integrity approach to answer our research questions: (1) How vague, confusing, or complete are the information flows? (2) How aligned are existing information flows with GDPR transparency requirements? (3) How relevant is the information to users? (4) What risk/benefit information is available? The study identified 59 public information flows regarding genetic data and found that 69% were vague and 37% were confusing regarding transfers of genetic data; consequently, GDPR transparency requirements may not be met. Additionally, companies lack public user-relevant information, such as the shared risks of disclosing genetic data. We then discuss user-centred and contextual privacy suggestions to enhance the transparency of public privacy and consent policies, and we suggest the use of such a contextual integrity analysis as a governance practice to assess internal practices.
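To make the contextual integrity analysis concrete, below is a minimal Python sketch of how an information flow can be encoded and checked for unstated parameters. The dataclass fields follow Nissenbaum's five contextual integrity parameters; the example flow and the simple vagueness check are illustrative assumptions, not the coding scheme actually used in the study.

from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class InformationFlow:
    # Nissenbaum's five parameters; None marks a parameter the
    # policy text leaves unstated.
    sender: Optional[str]                  # who discloses the data
    subject: Optional[str]                 # whom the data is about
    recipient: Optional[str]               # who receives the data
    attribute: Optional[str]               # type of information transferred
    transmission_principle: Optional[str]  # condition governing the transfer

def unspecified(flow: InformationFlow) -> list[str]:
    # Parameters left as None serve as a simple proxy for vagueness.
    return [f.name for f in fields(flow) if getattr(flow, f.name) is None]

# Hypothetical flow for a clause such as "we may share your genetic
# data with third parties": neither the recipient nor the transmission
# principle is pinned down.
flow = InformationFlow(
    sender="company",
    subject="customer",
    recipient=None,
    attribute="genetic data",
    transmission_principle=None,
)
print(unspecified(flow))  # ['recipient', 'transmission_principle']

Under such a reading, a clause that names neither the recipient nor the conditions of transfer would count toward the vague flows reported in the abstract.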

Robert Lee Poe (ESR 14) and Soumia Zohra El Mestari (ESR 15) authored “Borders Between Unfair Commercial Practices and Discrimination in Using Data.” Having initially investigated algorithmic fairness and discrimination in their Crossroad “Trust in Data Processing and Algorithmic Design,” Robert and Soumia narrowed the WOPA subject matter to an in-depth analysis of particular fair machine learning strategies used in practice to purportedly ensure non-discrimination and fairness in automated decision-making systems. The intersection of algorithmic unfairness and non-discrimination law is the focal point of Robert’s Ph.D. research, specifically the legality of using fair machine learning techniques in automated decisions (hiring, admissions, loan decisions, etc.) from both a European Union and a United States legal perspective. Soumia’s Ph.D. research focuses on implementing privacy-preserving techniques as constraints to achieve trustworthy processing in complex machine learning pipelines; she also investigates the gap between data protection legislation and trustworthy machine learning implementations, and how the different components of trustworthiness, such as privacy, robustness, and fairness, interact. Studying the dynamics of these interactions offers a better understanding of how a trustworthy machine learning pipeline should be implemented, exposed as a service, and interpreted under the different legal instruments. The WOPA focuses on one such interaction, namely between robustness (measured as accuracy) and fairness (measured as group similarity), and on how emphasizing one of these two components affects the other under different data distributions; a minimal sketch of this trade-off follows below. The main contribution of the WOPA is the conceptual and empirical clarity it provides on the trade-off between statistically accurate outcomes (robust) and group-similar outcomes (fair). While that distinction is not a legal one, it has many implications for non-discrimination law, and further research in that direction is needed, with specific suggestions given in the conclusion of the article.
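To illustrate the robustness/fairness interaction the WOPA studies, here is a minimal, self-contained Python sketch on synthetic data. Accuracy stands in for robustness and the demographic parity difference for group similarity; the score distributions, thresholds, and metric choices are illustrative assumptions rather than the paper's actual experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores: group B's distribution is shifted downward, so a
# single decision threshold yields unequal positive rates across groups.
n = 10_000
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
score = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.15)
label = (score + rng.normal(0.0, 0.1, n) > 0.5).astype(int)  # noisy ground truth

def evaluate(threshold_a, threshold_b):
    # Accuracy (robustness proxy) and demographic parity difference
    # (group-similarity proxy) under group-specific thresholds.
    thr = np.where(group == 0, threshold_a, threshold_b)
    pred = (score > thr).astype(int)
    accuracy = (pred == label).mean()
    parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, parity_gap

# One shared threshold: higher accuracy, large parity gap.
print("shared threshold:   acc=%.3f, gap=%.3f" % evaluate(0.5, 0.5))
# Lowering group B's threshold closes the gap but costs accuracy.
print("adjusted threshold: acc=%.3f, gap=%.3f" % evaluate(0.5, 0.3))

Group-specific thresholds are one of the simplest fair machine learning interventions; the same pattern, better group similarity at some cost in accuracy, recurs with more elaborate techniques, and its size depends on the underlying data distributions, which is exactly the dependence the WOPA examines.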
Both WOPAs address questions in data-driven societies that cannot be viewed and fully grasped in isolation but are instead fully interconnected.