Partially-federated learning: A new approach to achieving privacy and effectiveness

Gianluca Lax; Antonia Russo
2022-01-01

Abstract

In machine learning, the data used to train a model are typically stored centrally. However, when the data come from different sources and contain sensitive information, federated learning can be used to implement a privacy-preserving, distributed machine-learning framework. In this setting, multiple client devices participate in training a global model by sharing only model updates with the server while keeping the original data local. In this paper, we propose a new approach, called partially-federated learning, that combines centralized machine learning with federated learning. This hybrid architecture trains a unified model across multiple clients, where each client can decide whether a sample must remain private or can be shared with the server. This decision is made by a privacy module that can enforce various techniques to protect the privacy of client data. The proposed approach improves performance compared to classical federated learning.
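The abstract describes a per-sample routing step: a client-side privacy module decides whether each sample may be shared with the server (and trained on centrally) or must remain local (contributing only through federated model updates). The Python sketch below illustrates one way such a module could work, using a simple k-anonymity check over quasi-identifiers; all names (Sample, PrivacyModule, is_shareable, split_dataset) and the choice of k-anonymity as the enforcement technique are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of the per-sample routing described in the abstract.
    from dataclasses import dataclass
    from collections import Counter
    from typing import List, Tuple

    @dataclass
    class Sample:
        quasi_identifiers: Tuple  # attributes that could re-identify a person
        features: List[float]
        label: int

    class PrivacyModule:
        """Decides, per sample, whether it may be shared with the server.

        Here the decision uses a simple k-anonymity check: a sample is
        shareable only if at least k samples in the client's local dataset
        carry the same quasi-identifier values. Other techniques (e.g.
        l-diversity) could be plugged in instead.
        """

        def __init__(self, local_data: List[Sample], k: int = 3):
            self.k = k
            self.counts = Counter(s.quasi_identifiers for s in local_data)

        def is_shareable(self, sample: Sample) -> bool:
            return self.counts[sample.quasi_identifiers] >= self.k

    def split_dataset(local_data: List[Sample], k: int = 3):
        """Route each sample: shared ones can be sent to the server for
        centralized training, private ones stay local and contribute only
        through federated model updates."""
        module = PrivacyModule(local_data, k=k)
        shared = [s for s in local_data if module.is_shareable(s)]
        private = [s for s in local_data if not module.is_shareable(s)]
        return shared, private

    if __name__ == "__main__":
        data = [
            Sample(("30-39", "F"), [0.2, 1.1], 0),
            Sample(("30-39", "F"), [0.4, 0.9], 1),
            Sample(("30-39", "F"), [0.3, 1.0], 0),
            Sample(("70-79", "M"), [0.9, 0.1], 1),  # rare group: kept local
        ]
        shared, private = split_dataset(data, k=3)
        print(f"shared with server: {len(shared)}, kept local: {len(private)}")

In a full system, the shared samples would be added to the server's training set, while the private samples would be used only in the local training rounds of the federated protocol.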
Year: 2022
Keywords: Collaborative learning; Distributed databases; k-anonymity; l-diversity; Machine learning
Files in this record:

Fisichella_2022_j.ins_partially_editor.pdf
Description: Publisher's version (PDF)
Type: Publisher's version (PDF)
License: All rights reserved
Access: not available (copy on request)
Size: 1.47 MB, Adobe PDF

Fisichella_2022_j.ins_partially_post.pdf
Description: Post-print
Type: Post-print document
License: Creative Commons
Access: under embargo until 20/10/2024
Size: 1.23 MB, Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12318/131730
Citations
  • PubMed Central: not available
  • Scopus: 8
  • Web of Science: 7