DigiCon III: Public Use of AI and Fundamental Rights Impact Assessment
On October 19, 2023, Mitisha Gaur had the pleasure of attending DigiCon III, a conference on digital transformation and innovation, in Florence, Italy. She attended as a panelist in the roundtable on the public use of AI and during her talk focused on fundamental rights impact assessment under the EU’s proposed AI Act.
The roundtable offered a lively and informative discussion of the challenges and opportunities of using AI in the public sector. The panelists discussed the importance of ensuring that AI systems are used in a way that respects fundamental rights, such as privacy, non-discrimination, and freedom of expression, as well as the need for robust impact assessment processes to identify and mitigate the potential risks of AI systems.
One of the key takeaways from the roundtable was that there is no one-size-fits-all approach to fundamental rights impact assessment. The specific risks and impacts of AI systems will vary depending on the specific context in which they are used. However, there are some general principles that can be applied to all impact assessments.
First, a wide range of stakeholders should be involved in the impact assessment process, including people who will be affected by the AI system as well as experts in relevant fields such as law, ethics, and technology. Second, all potential impacts of the AI system, both positive and negative, should be considered, including the effects on different groups of people, such as minorities and vulnerable groups. Third, the impact assessment should be conducted in a transparent and accountable manner: the findings of the assessment should be made public, and stakeholders should have the opportunity to provide feedback.
The EU’s proposed AI Act is a significant step forward in the regulation of AI. The Act requires that all high-risk AI systems undergo a fundamental rights impact assessment. This is a positive development, as it will help to ensure that AI systems are used in a way that respects fundamental rights.
However, the AI Act is still at the proposal stage, and it must be implemented in a way that is effective and proportionate without stifling innovation or hindering the development of beneficial AI applications. There is shared optimism across the board about the future of AI in the public sector: AI has the potential to revolutionize the way public services are delivered, provided it is used responsibly and ethically. The EU’s proposed AI Act is a step in the right direction, but more needs to be done to ensure that AI is used in a way that benefits all of society.




In the workshop, ESRs Fatma S. Doğan and Soumia El Mestari presented their paper, “Techniques to achieve anonymization of health data: When are they sufficient to be considered as legally compliant?”, co-authored with Dr Wilhelmina Maria Botes. The study examined the European Health Data Space (EHDS) and how it relates to using health data for secondary purposes while aiming to keep it anonymized. They argued that the current stage of the EHDS proposal omits important points concerning the operation of the secondary use framework, and further showed that anonymizing different types of health data raises distinct technical challenges.
She remarked that the environmental effects, the mining of rare earth minerals, and the immense human labor involved in developing AI technologies are often overlooked. She likened generative AI to magic, since we do not know the cost of building it because big tech companies are not transparent. To improve the field, she suggested rethinking the training process alongside fair working conditions, pursuing more research on sustainable generative AI, and increasing openness in research to address the current lack of transparency. In this way, she shed light on the shadowed parts of developing AI technologies, arguing that if we knew the true cost of AI technologies we could be more conscious about what we want and what we do not want. Lastly, she mentioned her book, Atlas of AI, in which she explores these themes in depth.
ESR Barbara Lazarotto has contributed to a white paper on the “Use and re-use of urban data in the smart city domain”.
sheets facilitated the conceptualization of these concepts. The majority of visitors expressed interest in AI developments, making the AI Act the central topic of most conversations. High school students were also intrigued by the backgrounds of the ESRs: the diversity of disciplines represented by the three ESRs sparked their interest in potential future careers.
The ESRs also had the opportunity to visit the scientific fair, where researchers from various fields presented their projects, which proved inspiring; the connection with the CERN labs stood out as a prominent example. The program provided a valuable chance for the ESRs to engage with other researchers.
On 21-22nd September 2023, Onntje presented “Vantage Point for the Protection of Consumer Data – Protecting or Polluting the Privacy Ecosystem?” in a panel on ‘Beyond Data Protection’. He elaborated on how the regulation of data has increasingly become a concern for (market-focussed) European consumer law, despite being anchored in the (fundamental rights-focussed) European data protection framework. While authors have increasingly identified potential complementarities between the two fields of law for the protection of consumer data, he argued that they may at times pursue opposite objectives. Drawing parallels with the debates surrounding the uneasy relationship between consumer and environmental policy, Onntje showed how the regulation of consumer data under consumer law potentially contributes not only to the protection but also to the pollution of the privacy ecosystem. At the same time, he relied on this analogy to showcase how existing tensions between the two policy areas can be overcome.
Barbara presented her paper “The Smart Government Paradox: A Critical Reflection on EU Constitutional and Data Law Landscape in Light of Techno-Solutionism” in a panel on government data and surveillance. She explored how business-to-government data-sharing practices can give governments superpowers that threaten the rights of citizens, and analyzed how the creation of multilayered data access rights in the public sector can give citizens tools to fight against smart government abuses.