Posts

Data Privacy in the Financial and Industrial Sectors

Nowadays, one of the most important issues for enterprises across the financial services industry is privacy and data protection. Records, and financial records in particular, are considered sensitive by most consumers, and the respective regulators promote good data handling practices, especially where increased customer profiling is used to identify potential opportunities and to perform risk management analysis. To this end, the management of data privacy and data protection is of great importance throughout the customer lifecycle. For example, several use cases in the finance sector involve sharing data across different organizations (e.g., sharing customer data for customer protection or faster KYC, sharing businesses’ data for improved credit risk assessment, sharing customer insurance data for faster claims management, and more).

To facilitate such cases, several EU-funded projects have already discussed the need to reconsider data usage and regulation in order to unlock the value of data while fostering consumer trust and protecting fundamental rights. Permissioned blockchain infrastructure is utilized to provide privacy control, auditability, secure data sharing, and faster operations. The core of the blockchain infrastructure is enhanced in two directions: (i) integration of tokenization features and the relevant cryptography, as a means of enabling asset trading (e.g., personal data trading) through the platform; and (ii) utilization of Multi-Party Computation (MPC) and Linear Secret Sharing (LSS) algorithms to enable querying of encrypted data, offering stronger data privacy guarantees. Based on these enhancements, the project will enable the implementation of disruptive business models for personalization, such as personal data markets.
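As a concrete, if simplified, illustration of the second enhancement, the sketch below shows how additive (linear) secret sharing can answer an aggregate query over data that no single party sees in the clear. The three-party setting, the modulus, and the sample balances are assumptions made for the example, not details of the actual platform.

```python
# Minimal additive (linear) secret-sharing sketch, assuming a three-party
# setting and a simple "total balance" query; values are illustrative only.
import random

PRIME = 2**61 - 1                      # all arithmetic is done modulo a prime

def share(secret, n_parties=3):
    """Split a value into n additive shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each customer's balance is split across three independent nodes, so no
# single node ever holds an individual value in the clear.
balances = [1200, 560, 9800]
per_party = list(zip(*(share(b) for b in balances)))  # shares held by each party

# Because the sharing is linear, each party can sum its shares locally...
local_sums = [sum(p) % PRIME for p in per_party]
# ...and only the aggregate answer to the query is reconstructed.
print(reconstruct(local_sums))         # -> 11560
```

Only the final aggregate is ever reconstructed; recovering any individual balance would require all three parties to collude.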

LeADS builds upon those results and steps forward by setting a more ambitious goal: to experiment, in partnership with businesses and regulators, with ways to pursue not only the lawfulness of data mining and AI development, but both the amplest protection of fundamental rights and, simultaneously, the largest possible data exploitation in the digital economy. Using the coexisting characteristics of data-driven financial services, LeADS helps to define: Trust; Involving; Empowering; Sharing. Participation in several of the mentioned projects (e.g. XAI, SoBigData++) and/or close scientific connections with their research teams (e.g. CompuLaw) by several consortium members ensure close collaboration with the named projects. Data science and AI development hold great potential, but they also entail great risks in terms of privacy and industrial data protection. Even considering legal novelties such as the Digital Single Market strategy, the GDPR, the Network and Information Security (NIS) directive, the e-privacy directive, and the new e-privacy regulation, legal answers are often regarded as inadequate compromises in which individual interests are not really protected. As far as academic work on the subject is concerned, there are several challenges in data-driven financial services that LeADS could still address, regarding the empowerment of individuals (users, clients, stakeholders, etc.) in the processing of their data, through “data protection rights” or “by design” technologies, such as the blockchain approach described above.

The approach of the LeADS Early-Stage Researchers will be developed along two strands:

1) The study of digital innovation and business models (e.g. multisided markets, freemium) that depend on the collection and use of data in the financial sector. This strand will also: a) link the analysis to the exploration of online behaviour and of users’ reactions to different types of recommendations (i.e. personalized recommendations by financial/industrial applications), which generate additional data as well as large network effects; and b) assess (through efficiency and impact studies) the many specific privacy regulations that apply to online platforms, business models, and behaviours.

2) The proposal of a user-centric data valorisation scheme. By analysing user-centric patterns, the project aims to: a) identify alternative schemes to data concentration that place the user at the heart of the control and economic valorisation of “his” data, whether personal or not (VRM platforms, personal cloud, private open data); and b) assess the economic impact of these new schemes, their efficiency, and the legal dimension at stake in terms of liability and respect of privacy.

The project will also suggest new models allowing the user to obtain results regarding the explainability of the algorithms used by financial organizations to provide the aforementioned personalized recommendations (a simple illustration is sketched below). LeADS research will overcome the contrasting views that consider privacy as either a fundamental right or a commodity. It will enable clear distinctions between notions of privacy that relate to data as an asset and those that relate to personal information affecting fundamental rights.
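By way of illustration of the explainability models mentioned above, the toy example below shows how a purely additive scoring model can report each feature's contribution to a recommendation back to the user. The model, weights, and feature names are hypothetical and chosen only for this sketch.

```python
# Toy additive scoring model whose per-feature contributions can be shown to
# the user; the weights and feature names below are hypothetical.
WEIGHTS = {"income_k": 0.75, "existing_debt_k": -1.25, "years_as_customer": 0.5}
BIAS = -10.0

def score(features):
    """Overall score behind the personalized recommendation."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Each feature's additive contribution, largest absolute effect first."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))

customer = {"income_k": 45, "existing_debt_k": 12, "years_as_customer": 6}
print(score(customer))    # -> 11.75
print(explain(customer))  # -> {'income_k': 33.75, 'existing_debt_k': -15.0, 'years_as_customer': 3.0}
```

Real recommendation systems are rarely this simple, but the same principle of reporting per-feature contributions underlies many post-hoc explanation techniques.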

Against this background, LeADS’ innovative theoretical model, based on new concepts such as “Un-anonymity” and “Data Privaticity”, will be assessed within several legal domains (e.g. consumer sales and financial services, information society contracts, etc.) and in tight connection with actual business practices and models and the software they use. Finally, given the increasing potential of Artificial Intelligence for information processing, LeADS introduces a fully renewed approach to data protection and data exploitation by building a new paradigm for information and privacy: a framework that will strengthen individuals’ awareness in a data economy in which data is constantly gathered and processed without their awareness and in which the potential for discrimination is hidden in the design of the algorithms used. Thus, LeADS will set the theoretical framework and the practical implementation template of financial smart models for co-processing and joint-controlling information, thereby answering the specific need to clarify and operationalize these newly introduced notions in the GDPR.

Technical and legal aspects of privacy-preserving services: the case of health data

Nowadays, the potential usefulness as well as the value of health data are broadly recognized. They may transform traditional medicine into clinical science intertwined with data research, driving innovation and producing value from the perspective of the key stakeholders of the health care ecosystem: not only patients but also health care providers and the life insurance sector.

Yet health data does not appear out of thin air; it is not a product that can be viewed in isolation. It is:

  • the personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status (data concerning health),
  • the personal data relating to the inherited or acquired genetic characteristics of a natural person which give unique information about the physiology or the health of that natural person and which result, in particular, from an analysis of a biological sample from the natural person in question (genetic data),
  • the personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data (biometric data).

Thus, the individual cannot be deprived of the right to decide about the processing of such data, as health issues are at the very centre of the privacy protection sphere.

It becomes clear that balancing the interests of the private individual whose privacy is protected, the interests of other private and public actors, and general common interests is highly problematic. Naturally, processing of health data cannot be unrestricted: optimally, the legal framework should facilitate unlocking the value of health data for European citizens and businesses and empower users in the management of their own health data, without undermining the very essence of the right to privacy.

Currently, processing of health data falls under a complex GDPR legal regime. This poses a serious challenge for data processors on the one hand and, on the other, gives rise to numerous legal questions. What are the grounds for processing such data in this highly differentiated context? How should medical data be protected at both the regulatory and the technological level? How can we harness the newest technology to increase data safety? How can anonymization and/or privacy-preserving data management techniques using efficient cryptography (e.g. homomorphic encryption, secure multi-party computation) contribute to reaching higher protection levels without becoming a hurdle or an impediment to legitimate data processing? Can blockchain technologies be used for health information exchange? Should the creation of technological infrastructure be coupled with establishing proper key management schemes?
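To make the cryptographic part of these questions more tangible, the sketch below implements a toy additively homomorphic (Paillier-style) scheme in which two encrypted values, say patient counts reported by two hypothetical hospitals, can be summed without decrypting the individual contributions. The tiny primes, the scenario, and the parameter choices are assumptions made purely for illustration; any real system would rely on an audited library and production-size keys.

```python
# Toy Paillier-style additively homomorphic encryption (Python 3.9+),
# for illustration only: the demo primes are tiny and offer no real security.
import math
import random

def keygen(p=499, q=547):                 # tiny demo primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda of n = p*q
    mu = pow(lam, -1, n)                  # valid because we fix g = n + 1 below
    return (n,), (n, lam, mu)             # public key, private key

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2   # c = g^m * r^n mod n^2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n        # L(x) = (x - 1) / n
    return (l * mu) % n

def add_encrypted(pub, c1, c2):
    (n,) = pub
    return (c1 * c2) % (n * n)            # ciphertext product = plaintext sum

pub, priv = keygen()
# Two hypothetical hospitals report encrypted counts; the aggregator adds
# them without ever seeing the individual figures.
c_total = add_encrypted(pub, encrypt(pub, 41), encrypt(pub, 17))
print(decrypt(priv, c_total))             # -> 58
```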

The task is twofold. First, on the regulatory level, general policy guidelines for legislators, independent agencies, and businesses on data sharing platforms are necessary, together with an analysis of the policy and market implications of providing privacy-preserving services. Second, practical recommendations are needed: specific postulates should be formulated on how data protection techniques can be applied in the health domain in order to contribute to achieving the abovementioned aims.

Author: Dr. Katarzyna Południak-Gierz, Jagiellonian University

The beginning of the LeADS era

On January 1st, 2021, LeADS (Legality Attentive Data Scientists) started its journey. A consortium of 7 prominent European universities and research centres, along with 6 important industrial partners and 2 Supervisory Authorities, is exploring ways to create a new generation of LEgality Attentive Data Scientists while investigating the interplay between and across many sciences.

LeADS envisages a research and training programme that will blend ground-breaking applied research with pragmatic problem-solving from the involved industries, regulators, and policy makers. The skills produced by LeADS and tested by the ESRs will make it possible to tackle the confusion created by the blurred borders between personal and commercial information, and between personality and property rights, that are typical of the big data environment. Both processes constitute a silent revolution, developed by new digital business models, industrial standards, and customs, that is already embedded in soft law instruments (such as stakeholders’ agreements) and emerging in case law and legislation (Regulation (EU) 2016/679 and the e-privacy directive, to begin with), while data scientists remain mostly unaware of them. These changes cut across the Digital Transformation and call for a more comprehensive and innovative regulatory framework. Against this background, LeADS is animated by the idea that, in the digital economy, data protection holds the keys both to protecting fundamental rights and to fostering the kind of competition that will sustain the growth and “completion” of the “Digital Single Market” and the competitive ability of European businesses outside the EU. Under LeADS, the General Data Protection Regulation (GDPR) and other EU rules will dictate the transnational standard for the global data economy, while the project trains researchers able to drive the process and set an example.

The data economy, or rather the data society we increasingly live in, is our explorative target from many angles (from the technological to the legal and ethical). This new generation is needed to better answer the challenges of the data economy and the unfolding of the digital transformation. Our Early Stage Researchers (ESRs) will come from many experiences and backgrounds (law, computer science, economics, statistics, management, engineering, policy studies, and mathematics).

ESRs will find an enthusiastic transnational, interdisciplinary team of teams tackling the relevant issues from many angles. These research teams will support their research in setting the theoretical framework and the practical implementation template of a common language.

The LeADS research plan, although it already envisages 15 specific topics for interdisciplinary investigation, remains open-ended.

This is natural in the fields we have selected, in which we have identified crossover concepts in need of a common understanding that will be useful for future researchers, policy makers, software developers, lawyers, and market actors.

LeADS research strives to create and share cross-disciplinary languages, integrating the respective background domain knowledge of its participants into one shared idiolect that it wants to share with a wider audience.

It is LeADS’ understanding that regulatory issues in data science and AI development and deployment are often perceived as (and sometimes are) hurdles to innovation, markets, and above all research. Our unwritten goal is to help turn the regulatory and ethical constraints that are needed into opportunities for better development.

LeADS aims at nurturing a data science capable of maintaining its innovative solutions within the borders of the law – by design and by default – and of helping to expand the legal frontiers in line with innovation needs, preventing the enactment of legal rules that are technologically unattainable.

By Giovanni Comandé