
Artificial Intelligence and the Right to Privacy: Balancing Innovation and Fundamental Rights in European AI Governance - A Comparative Perspective on China

BIANCHI, SARA
2024/2025

Abstract

This thesis investigates the complex relationship between technological innovation and the protection of privacy in the age of artificial intelligence. The central question is whether innovation and privacy should be seen as conflicting goals or as two dimensions that must be balanced in order to build a sustainable digital society. Starting from a historical reconstruction of the right to privacy and its evolution within the Italian and European constitutional systems, the research examines the current regulatory framework governing AI, focusing on the GDPR, the AI Act, and major ethical guidelines. The core of the thesis analyzes how AI (through biometric surveillance, inferential profiling, opaque decision-making systems, and large-scale data processing) creates systemic risks that challenge traditional legal categories and the effectiveness of individual rights. Three Italian case studies (the temporary blocking of ChatGPT, the Como facial recognition project, and AI in healthcare) illustrate the difficulties authorities face in enforcing existing regulations in contexts marked by high technical complexity and weak governance structures. The thesis also offers a comparative analysis of the European and Chinese models of AI governance, highlighting how cultural and political differences shape different understandings of privacy, individual rights, and technological development. Studying other approaches does not determine which model is superior; rather, it can provide valuable insights, allowing us to learn from best practices without compromising fundamental European principles. The research concludes that innovation and privacy are not opposing forces but interdependent values: privacy is not an obstacle to innovation but an essential condition for ensuring trust, fairness, and accountability. A sustainable balance therefore requires AI systems designed to incorporate legal and ethical constraints from the outset.
Finally, the thesis offers recommendations for creating a digital environment that is both democratic and sustainable over time: stronger governance, transparency, mandatory audits of high-impact AI systems, and independent oversight.

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14251/4864