Deep Learning as-a-Service and privacy: risks for data
In recent years, the Deep Learning as-a-Service (DLaaS) paradigm has become widespread, allowing users to access powerful Artificial Intelligence (AI) algorithms easily and transparently through cloud platforms (for example, ChatGPT). This approach has made AI accessible even to those without advanced technical skills or dedicated infrastructure, enabling companies and individuals to leverage sophisticated AI models simply by sending their data to an external provider. However, this convenience comes with a structural risk. Entrusting data to third parties entails giving up direct control over the information, exposing it to potential misuse or breaches. Numerous data breaches and scandals involving personal data management (such as Cambridge Analytica) demonstrate how concrete this issue is. Consequently, the current paradigm raises a fundamental dilemma: balancing the efficiency and scalability of the cloud with the protection of individual privacy, which is at risk of compromise when data becomes the primary driver of innovation.
Current solutions and their limits
To mitigate privacy risks, several solutions have been developed in recent years in an attempt to balance security and functionality. The first strategy is the on-premise approach, which avoids the cloud entirely by keeping data local and preventing it from ever leaving the owner's environment. While this ensures a high level of protection, it means forgoing the advantages of DLaaS mentioned earlier; moreover, not all organizations possess the skills or infrastructure required to manage this processing internally.
Another widely used solution is data pseudonymization or anonymization, meaning the removal of information that directly identifies an individual. However, numerous studies have shown that large datasets, even when anonymized, can be cross-referenced with secondary sources to re-identify individuals. Moreover, anonymization tends to reduce the informational quality of the data, making AI models less accurate and reliable.
Each of these approaches represents a step forward, but none fully reconciles security, efficiency and usability. The growing need to protect data throughout its entire lifecycle without sacrificing the potential of DLaaS has pushed research in new directions, opening the door to a radically different paradigm: privacy-preserving AI.
Data protection: the role of the GDPR and the European AI Act
In recent years, the European Union (EU) has shown a strong commitment to privacy protection, placing it at the center of its regulatory framework. The General Data Protection Regulation (GDPR), in force since 2018, marked a major turning point. For the first time, uniform and binding standards were established across all member states regarding the processing, storage, and security of personal data. Organizations are required to collect only strictly necessary information, use it for legitimate purposes, and protect it with appropriate measures throughout its lifecycle. The GDPR also introduced the principle of privacy-by-design, which mandates integrating data protection from the earliest design stages of any technological system, including those based on AI.
In continuity with this vision, the AI Act was approved in 2024 as the first European regulatory framework specifically dedicated to artificial intelligence. The regulation adopts a risk-based approach aimed at ensuring that AI use is transparent, safe, and respectful of fundamental rights. Although it does not directly address privacy-preserving AI systems, the AI Act reinforces the EU’s commitment to solutions that combine innovation and data protection.
Within this regulatory context, research and applications aiming to ensure data confidentiality, not only during collection or storage but also during processing, are gaining importance. This is precisely where the privacy-preserving AI approach emerges, utilizing technologies such as Homomorphic Encryption to keep data encrypted and secure throughout the entire computation process.
Privacy-preserving AI and Homomorphic Encryption: how it works
To address the growing need for security and confidentiality, scientific research has developed a new approach that allows the use of advanced artificial intelligence models without ever exposing data in plaintext form. The underlying idea is simple yet revolutionary: allow models to learn, analyze, and produce results while keeping data encrypted at all times, even during computation. This is made possible by an advanced cryptographic technology known as homomorphic encryption.
Homomorphic encryption allows mathematical operations to be performed directly on encrypted data, producing results that, once decrypted, exactly match those that would be obtained by working on the original data. Simply put, it is as if the data were locked inside a safe; the system can manipulate the contents without ever opening it or seeing what is inside. The result remains encrypted and can only be decrypted by the legitimate owner of the secret key.
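To make the idea concrete, here is a minimal, self-contained Python sketch of the principle using the Paillier cryptosystem, one of the simplest (additively) homomorphic schemes. The parameters are deliberately tiny and insecure, and the example is purely illustrative; it is not the scheme or code used in production systems.

```python
# Minimal Paillier cryptosystem: additively homomorphic encryption.
# WARNING: demo-sized keys and hard-coded primes -- illustration only, not secure.
import math
import random

# Key generation (real deployments use moduli of 2048 bits or more)
p, q = 104729, 104723                  # two small primes
n = p * q
n_sq = n * n
g = n + 1                              # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)        # modular inverse used in decryption

def encrypt(m: int) -> int:
    """Encrypt an integer 0 <= m < n with fresh randomness."""
    r = random.randrange(2, n)         # coprime to n with overwhelming probability
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover the plaintext; only possible with the secret values lam and mu."""
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# The homomorphic property: multiplying ciphertexts adds the hidden plaintexts.
a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n_sq    # computed without ever seeing a or b
assert decrypt(c_sum) == a + b              # 6912, identical to the plaintext sum
```

Fully homomorphic schemes such as BFV and CKKS, the ones actually used for deep learning workloads, extend this idea to support both additions and multiplications on encrypted values.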
This approach ensures complete protection throughout the entire lifecycle of the data. Information remains encrypted during storage, transfer, and even processing. In some applications, the AI model itself can also be encrypted, protecting not only user data but also the intellectual property of the algorithm developers.
A concrete example helps clarify this process. Imagine a hospital that wants to use a cloud-based deep learning model to improve cancer diagnosis from radiographs. In a traditional scenario, the images would be sent in plain form to the service provider, exposing them to the risk of a breach. With homomorphic encryption, the hospital encrypts each radiograph before sending it. The server receives only unreadable files but can still perform the necessary analyses, such as identifying suspicious areas, without ever decrypting them. The computation result is returned in encrypted form and can only be read by the hospital.
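The same client/server flow can be sketched in a few lines of Python. The example below assumes the open-source TenSEAL library (a wrapper around Microsoft SEAL) and a toy linear "suspicion score" standing in for a real diagnostic network; all names, features, and parameters are illustrative assumptions, not an actual medical pipeline.

```python
# Hypothetical client/server flow with CKKS homomorphic encryption,
# assuming the open-source TenSEAL library (pip install tenseal).
import tenseal as ts

# --- Hospital (client): generate keys and encrypt the data ----------------
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

features = [0.12, 0.87, 0.45, 0.33]               # e.g. features extracted from a radiograph
enc_features = ts.ckks_vector(context, features)  # this ciphertext is what the cloud receives

# --- Cloud provider (server): works on ciphertexts only -------------------
# A toy linear "suspicion score" standing in for the real diagnostic model.
weights = [0.5, 1.2, -0.7, 0.3]
enc_score = enc_features.dot(weights)             # the result is still encrypted

# --- Hospital (client): only the secret-key holder can read the result ----
score = enc_score.decrypt()[0]
print(f"suspicion score: {score:.4f}")            # ~0.888, matching the plaintext dot product
```

At no point does the server need the secret key; in a real deployment the encryption context shared with the cloud is serialized without it, so only the hospital can ever read the final score.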
This approach eliminates the need to trust the cloud provider because data confidentiality does not depend on contractual clauses but is guaranteed by the mathematical properties of the technology itself. It is a zero-trust paradigm where security relies on cryptography rather than on contractual trust. The combination of deep learning and homomorphic encryption paves the way for AI that truly respects privacy, leveraging the power of the cloud without compromising confidentiality.
Technological challenges and barriers to adoption
Although privacy-preserving AI and homomorphic encryption represent some of the most promising innovations in data security, their large-scale adoption is still in a growth phase. This is a rapidly evolving area of research, where technical challenges are gradually being addressed through continuous scientific progress and increasing industry interest.
One of the main difficulties concerns computational complexity. Operating on encrypted data requires more resources and results in higher latency compared to processing data in plain form. However, recent hardware developments and algorithmic optimizations are significantly reducing these times, making increasingly efficient and practical applications possible. Simultaneously, improvements in open-source libraries and specialized frameworks are lowering the entry barrier for researchers and companies interested in these technologies.
Another key aspect involves aligning technology with regulatory frameworks. For these solutions to be widely adopted, they must be recognized and integrated into existing regulations. Several international initiatives are working in this direction, such as HomomorphicEncryption.org, a consortium of research institutions, universities, and companies aiming to standardize homomorphic encryption, define common guidelines, and promote interoperability. Alongside the efforts of European institutions, these initiatives contribute to a growing openness toward privacy-enhancing technologies, which are increasingly recognized as essential tools for responsible innovation.
Overall, these challenges are tied to technological maturation and are expected to be overcome in the near future. The outlook is extremely positive; the convergence of academic research, industrial investment, and regulatory attention is setting the stage for the widespread adoption of AI systems that are truly secure, transparent, and respectful of privacy.
Dhiria: an Italian startup at the forefront of privacy-preserving AI
Dhiria plays a significant role in this field as one of the first organizations worldwide, and the first Italian startup, dedicated to the development of privacy-preserving AI. A spin-off of the Politecnico di Milano, Dhiria was created to bring research results in secure artificial intelligence and advanced cryptography to market. The company is recognized for the scientific quality of its research, published in major international conferences and journals, and for its contribution to the dissemination of privacy-preserving technologies.
Its active participation in the development of this field demonstrates how it is possible to combine the advantages of the cloud with an unprecedented level of data protection, made possible by homomorphic encryption and recent innovations in AI. More than a future vision, this represents a concrete anticipation of the next technological paradigm: artificial intelligence systems designed from the outset to safeguard the privacy of data and users.