There are moments in an academic career when research and responsibility converge in unexpected ways. Being invited to serve as Guest Editor for a Special Issue of the IEEE Computational Intelligence Magazine (CIM) is one of those moments. The Special Issue is titled Privacy-Preserving Machine and Deep Learning (PP-MDL), and it sits right at the intersection of what I study every day and what we build at Dhiria.
In this post, I want to share what it means to be a Guest Editor, what the role actually entails, and why this particular Special Issue matters, both for the research community and for anyone building AI systems that handle sensitive data.
What Is a Special Issue?
Academic journals typically publish articles on a rolling basis, covering the broad scope defined by the journal's editorial mission. A Special Issue is different: it is a curated collection of papers focused on a specific, timely topic. The goal is to attract cutting-edge contributions around a theme that the journal's editorial board believes deserves concentrated attention.
For IEEE CIM (a Q1 journal in Artificial Intelligence), a Special Issue signals that the topic is mature enough to warrant a dedicated research spotlight, yet still evolving fast enough to benefit from one.
The topic we chose, Privacy-Preserving Machine and Deep Learning, checks both boxes.
The Role of Guest Editor
A Guest Editor is not simply a reviewer with a fancier title. The role begins well before any manuscript is submitted and extends far beyond the final acceptance decisions.
It starts with defining the vision. With my co-editors, Prof. Manuel Roveri (Politecnico di Milano), Prof. Seiichi Ozawa (Kobe University), and Dr. Goichiro Hanaoka (AIST, Japan), I shaped the scope of the Special Issue: which sub-topics to include, what kind of contributions to encourage, and how to frame the call for papers so that it reaches the right communities. This means thinking carefully about the balance between theoretical advances and practical applications, between well-established techniques and emerging ones.
Then comes community outreach. A Special Issue lives or dies by the quality of the submissions it attracts. This means reaching out to research groups worldwide, presenting the call at conferences, and making sure the topic resonates with both seasoned researchers and newcomers to the field.
The heaviest part of the work is managing the peer review process. Each submitted manuscript must be assigned to qualified reviewers, and those reviews must be critically evaluated. The Guest Editor acts as a filter, not just accepting or rejecting, but guiding authors toward stronger, more complete contributions. It's a delicate balance: rigorous enough to uphold the journal's standards, constructive enough to improve the work rather than merely gatekeep it.
Finally, the Guest Editor contributes to the editorial narrative. The collection of accepted papers should tell a coherent story about the state of the art. This often involves writing an editorial preface that ties the contributions together and highlights emerging trends.
Why Privacy-Preserving Machine and Deep Learning?
Anyone following AI research knows that privacy is no longer a nice-to-have. Regulations like the GDPR and the EU AI Act are tightening the constraints under which AI systems can operate. But even beyond compliance, there is a fundamental engineering challenge: how do you build machine learning models that are both powerful and privacy-respecting?
The Special Issue on PP-MDL tackles this question head-on. The topics of interest span the full stack of privacy-preserving AI, including Differential Privacy and k-Anonymity methods, Homomorphic Encryption for encrypted training and inference, Secure Multiparty Computation, privacy-preserving Federated Learning, hardware accelerators, ethical considerations, and novel applications such as privacy-preserving Large Language Models and genomic data processing.
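To make one of these techniques concrete, here is a minimal sketch of the Laplace mechanism from Differential Privacy: a query result is released with calibrated noise so that no single record can be inferred from the answer. The dataset, the clipping bound, and the epsilon value below are illustrative assumptions of mine, not examples from the Special Issue.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two i.i.d.
    # exponential samples, multiplied by the scale.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

# Example: privately release the mean of a small (fictional) income dataset.
incomes = [42_000, 51_000, 38_000, 60_000, 45_000]

# Clip each record to a bound B so one individual's influence on the mean
# is limited: the sensitivity of the clipped mean is B / n.
B = 100_000
clipped = [min(max(x, 0), B) for x in incomes]
true_mean = sum(clipped) / len(clipped)
sensitivity = B / len(clipped)

# Smaller epsilon means stronger privacy and more noise.
epsilon = 1.0
private_mean = true_mean + laplace_noise(sensitivity / epsilon)
```

The key design choice is the clipping bound: without it, a single outlier could make the sensitivity, and hence the required noise, unbounded.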
What excites me about this Special Issue is that it doesn't treat privacy as an afterthought bolted onto existing systems. Instead, it invites the community to rethink model architectures, training procedures, and deployment pipelines from the ground up, with privacy as a first-class design constraint.
What This Means for Dhiria
If you've been following this blog, you'll know that Dhiria was founded precisely on this premise: AI and privacy are not in conflict; they can coexist. We have written about training ML models on encrypted data, about Homomorphic Encryption in Federated Learning, and about the limits of Trusted Execution Environments.
Serving as Guest Editor for this Special Issue reinforces Dhiria's position at the frontier of privacy-preserving AI research and lets us actively shape the direction the field takes. The topics covered in the Special Issue map directly onto the technical challenges we face when building privacy-preserving AI products.
More broadly, it's a reminder that in the world of privacy-preserving AI, the gap between academia and industry is thinner than in most other fields. The techniques being researched today are the production systems of tomorrow. Being embedded in both worlds, the editorial board and the startup, gives us a unique vantage point.
The Honor and the Duty
Guest editing a Special Issue for IEEE CIM is, without doubt, an honor. It means that the broader research community recognizes that this topic matters and that our team has the expertise to curate it. It is also a tremendous amount of work: hundreds of emails, dozens of reviews to coordinate, and the constant intellectual effort of staying fair, thorough, and constructive.
But the real reward is seeing the field advance. When I look at the submissions we receive, I see researchers across the world tackling the same problems we face at Dhiria, and often proposing solutions we hadn't considered. That exchange, mediated by the rigorous but collaborative process of peer review, is what pushes science and technology forward.
The manuscript deadline was December 1, 2025, and we are currently deep into the review process. First notifications are expected by March 2026, with final decisions by July 2026. If you're working on privacy-preserving AI and want to stay updated, keep an eye on the Special Issue page.
In the meantime, we'll keep building. At Dhiria, the research doesn't stop at the editorial board. It powers our products, our mission, and our vision for an AI future where privacy is not a trade-off, but a guarantee.