The EU Data Act: Towards a data-sharing economy?

Author: Barbara da Rosa Lazarotto

There has never been so much data available about individuals. By 2025, the value of the EU data economy is projected to be comparable to the GDP of the Netherlands [1]. IoT companies hold large amounts of non-personal customer data that at first glance might look useless, yet can be highly valuable.

In today's world, it is almost impossible for an individual to avoid leaving a digital footprint with data-driven companies. We can therefore affirm that data is a form of capital [2] and that we live in a “data economy”. In line with this development, the EU is currently working on a data strategy consisting of several acts, such as the Data Governance Act [3] and the recently proposed Data Act [4], which is the focus of this post.

The Data Act is founded on the premise that data is the lifeblood of economic development [5]; it aims to clarify who can access and use data by removing barriers and providing a safe environment for data sharing.

We are currently in the midst of a technological revolution in which ordinary tools such as coffee makers and vacuum cleaners can generate data. The Data Act therefore aims to ensure that consumers can access the data generated by these everyday devices and to oblige companies to share such data with other companies so it can be put to better use. This is expected to spur innovation, create jobs and, in particular, benefit small businesses.

The Data Act also aims to prevent unfairness in data-sharing arrangements by creating a fairness test that prohibits companies from unilaterally imposing unfair contractual clauses on data sharing. This tool ensures a fair allocation of data value among the actors of the data economy.

The Act also addresses my research subject, public-private data sharing. On this point, Article 14 states that companies must make data available to public sector bodies in certain circumstances, e.g. in cases of emergency and other exceptional needs. In this context, there is a very good historical example of emergency data sharing that could easily have fit under the Data Act's provisions.

In August 2005, Hurricane Katrina struck the southeastern United States, leaving widespread death and destruction. Efforts to rebuild the New Orleans area gained a powerful ally in data sharing. Valassis Communications is a company that mails promotional circulars to virtually every house in the United States. Using this colossal database, volunteers were able to allocate funds more efficiently and help the individuals who needed them directly, without spending time on house-to-house surveys. In addition, the independent non-profit formerly known as the “Greater New Orleans Data Center” was able to track the city's repopulation block by block [6]. Using this data, the Data Center produced several reports covering, among other topics, geographies of poverty, housing development and abandonment, and life expectancy. Such reports can greatly aid the government in directing its efforts to those who most need assistance [7]. According to the Data Center, the data available to government officials at the time of the hurricane came from the outdated 2000 census of New Orleans; the data provided by the private company Valassis therefore exemplifies the potential benefits of public-private data sharing in emergencies.

The Data Act is also meant to make it easier for customers to switch between cloud service providers, requiring these providers to ensure smooth switching conditions.

Last but not least, the Data Act points out the importance of standardisation and semantic interoperability for data sharing and for the formation of a single market for data. Article 28 accordingly sets out several essential requirements that must be complied with to ensure interoperability, e.g. regarding data methodology, data quality, data formats and taxonomies.

In sum, the Data Act is a good first step towards the responsible and effective use of data, one that could create job opportunities and boost the economy with new types of services.

[1] https://www.inlinepolicy.com/blog/eu-data-act

[2] Jathan Sadowski, ‘When Data Is Capital: Datafication, Accumulation, and Extraction’ (2019) 6 Big Data & Society 2053951718820549.

[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0767

[4] https://digital-strategy.ec.europa.eu/en/library/data-act-proposal-regulation-harmonised-rules-fair-access-and-use-data

[5] COM(2020) 66 final, p. 2.

[6] https://spectrevision.net/2008/08/22/junk-mail-pings-new-orleans/

[7] https://www.datacenterresearch.org/maps/reference-maps/

Participation of Tommaso Crepax at the PrivacyCamp22

Tommaso Crepax participated in PrivacyCamp22 with a panel titled “Regulation vs. Governance: Who is marginalised, is ‘privacy’ the right focus, and where do privacy tools clash with platform governance?”.

Click here for the Session description

Click here for the Session recording

Click here for all the panel summaries and recordings of the Conference

Summary and Video of the Awareness Conference

Legality Attentive AI: Awareness Conference on Explainability of AI

28th of January, 2022

Webinar organised by LeADS in collaboration with the Brussels Privacy Hub

VIDEO

Summary of the Conference authored by ESR Robert Poe

For Privacy Day 2022, LeADS (Legality Attentive Data Scientists) and the Brussels Privacy Hub collaborated on the Awareness Conference on the Explainability of AI. Together, the group put on a panel of distinguished speakers: Paul Nemitz of the European Commission; Catelijne Muller, President of ALLAI and member of the EESC, the OECD expert group on AI, and the HLEG on AI; Dafna Feinholz of UNESCO; Riccardo Masucci of Intel; and Fosca Giannotti of Scuola Normale Superiore and CNR.

From the start, meaningful debate arose. And, until the last word, each speaker expressed themselves seriously and eloquently.

Dafna Feinholz spoke both of the great benefits and risks of AI and of the recent UNESCO Recommendation on the Ethics of AI (Nov. 2021). The Recommendation is admirable, advocating, from design to deployment, an ethical approach that benefits all actors involved in an AI project's lifecycle.

Paul Nemitz noted the EU's recent change of direction, from a focus on ethics to the establishment of legislation. Paul stressed that, in his opinion, these Codes of Conduct (professional ethics) were created to defend companies against regulatory action. Further, he argued that we need binding rules to ensure fair competition in the EU, and that companies should not be allowed to wash their hands of responsibility for artificial intelligence systems once they have released them onto the market.

Catelijne Muller thoughtfully rejected the commonly held belief that regulations stifle innovation, saying, “First of all, they don’t, they promote innovation because they level the playing field.” She added that regulations not only give much-needed legal certainty to corporate actors, but also produce standards that push companies to develop more sustainable and worthwhile AI systems. Catelijne continued by asking the audience to keep in mind the limited capabilities of today's AI systems. She ended with a hopeful legal remark on explainability: where a human is already required by law to explain something, an AI is bound as well.

Riccardo Masucci celebrated the consensus the EU has built around the general ethical principles that should guide the development of AI, but lamented that convergence on technical solutions has not yet happened. He added that future investments must be directed towards standardisation.

Fosca Giannotti, coming from a technical background, enthusiastically welcomed the responsibility placed on developers of AI, arguing that it brings forth new scientific challenges and that, in the context of explainability, this responsibility is changing AI research by ensuring a focus on synergistic collaboration between humans and AI systems. However, she noted the need for appropriate validation processes for such systems, which is difficult because it requires the evaluation of human-machine (socio-technical) interactions.

Afterwards, during the discussion phase, a debate sprang up around a tweet shared in the chat: “…you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate that can explain how they work.” Nemitz referred to such hypotheticals as “boogeymen” used to argue against fundamental rights. Meanwhile, Muller firmly confronted a commenter who asked whether a human surgeon could even explain themselves, saying that she would certainly hope so, and that these types of hypotheticals are nonsensical.

Over 70 attendees came to celebrate Privacy Day with an afternoon packed full of thought-provoking interaction. Thank you to everyone involved at LeADS and the Brussels Privacy Hub for hosting such an event.

WATCH AGAIN THE SoBigData++ and LeADS Awareness Panel

Recent Perspectives on Dynamic Consent in Research: a Combined Legal and Technical Approach

VIDEO