Mitisha Gaur at Digital Legal Talks and LawTomation Days

Mitisha Gaur, ESR with the LeADS Project, recently presented at Digital Legal Talks, which took place on the 15th of September in Utrecht, NL. Mitisha researches predictive justice applications deployed across courts and government bodies to augment and support decision-making practices, as well as how such predictive justice systems interact with the legal and regulatory ecosystem. Her presentation, titled Predictive Justice and human oversight under the EU's proposed Artificial Intelligence Act, focused on the provisions of the EU's draft AI Act dealing with human oversight requirements. It analysed those requirements and detailed the material gaps in the human oversight strategy adopted by the draft AI Act. Finally, she shared a plan for ensuring human oversight across four primary stakeholders, namely (1) developers of AI systems; (2) deployers of AI systems; (3) users of the AI system; and (4) the impact population on whom the computational results of the AI system are applied.



Subsequently, on the 29th of September, Mitisha also presented her work at the LawTomation Days 2023 Conference organised by the IE Law School. The presentation, titled Regulating Algorithmic Justice Applications under the EU's proposed Artificial Intelligence Act: A Critical Analysis, took a panoramic view of the provisions of the draft AI Act applicable to high-risk AI systems (which include predictive justice systems) as classified under the draft Act. The presentation discussed the various compliance requirements, specifically for predictive justice applications, and whether they are adequate to allow predictive justice systems to be developed and deployed to perform or augment functions on behalf of public authorities and judicial bodies. The core discussion revolved around the provisions pertaining to risk management systems, fundamental rights impact assessments, transparency and provision of information, human oversight, accuracy, robustness and cybersecurity, and finally the obligations of deployers of high-risk AI systems.