= 5.4 Privacy Mechanisms =

Privacy and data confidentiality are central design goals of Federated Learning (FL), particularly in edge computing scenarios where numerous edge and IoT devices (e.g., hospital servers, autonomous vehicles) gather sensitive data. Although FL does not require raw data to leave each client’s device, model updates can still leak private information or be correlated with individual data points. To address these challenges, various privacy-preserving mechanisms have been proposed in the literature.<sup>[1]</sup><sup>[2]</sup><sup>[3]</sup>

== 5.4.1 Differential Privacy (DP) ==

Differential Privacy (DP) is a formal framework ensuring that a model’s outputs (e.g., parameter updates) do not reveal individual records. In FL, DP typically involves injecting calibrated noise into gradients or model weights on each client. The noise is calibrated so that the global model’s performance remains acceptable, yet attackers cannot reliably infer whether any single data sample was present in the training set.

A step-by-step timeline of DP in an FL context can be summarized as follows:

# Clients fetch the global model and compute local gradients.
# Before transmitting gradients, clients add randomized noise to mask specific data patterns.
# The central server aggregates the noisy gradients to produce a new global model.
# Clients download the updated global model for further local training.

By carefully tuning the “privacy budget” (ε and δ), DP can balance privacy against model utility.<sup>[1]</sup><sup>[4]</sup> A minimal sketch of the client-side noising step (steps 1–2 above) appears after Section 5.4.3 below.

== 5.4.2 Secure Aggregation ==

Secure Aggregation (SecAgg) is a protocol that masks or encrypts local updates before they are sent to the server, ensuring that only the aggregated result is revealed. A typical SecAgg workflow includes:

# Each client randomly splits its model update into multiple shares.
# These shares are exchanged among clients and the server in a way that no single party sees the entirety of any update.
# The server obtains only the sum of all client updates, never individual parameters.

This approach can thwart internal adversaries who might try to reconstruct local data from raw updates.<sup>[2]</sup> SecAgg is crucial for preserving confidentiality, especially in IoT-based FL systems where data-privacy regulations (such as GDPR and HIPAA) prohibit raw data exposure. A sketch of the underlying mask-cancellation idea likewise appears after Section 5.4.3.

== 5.4.3 Homomorphic Encryption and SMPC ==

Homomorphic Encryption (HE) supports computations on encrypted data without decryption. In FL, homomorphically encrypted gradients can be aggregated securely by the server, preventing it from seeing cleartext updates. This approach, however, introduces substantial computational overhead, which can be burdensome for resource-limited IoT edge devices.<sup>[3]</sup>

Secure Multi-Party Computation (SMPC) is a related family of techniques that enables multiple parties to perform joint computations on secret inputs. In the context of FL, SMPC allows clients to compute sums of model updates without revealing any individual update. Although performance optimizations exist, SMPC remains challenging for large-scale models with millions of parameters.<sup>[1]</sup><sup>[5]</sup>
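The following is a minimal, illustrative sketch of the client-side clip-and-noise step from Section 5.4.1, in the style of DP-SGD. The function name <code>dp_sanitize</code> and the hyperparameter values are hypothetical; a production system would also track the cumulative (ε, δ) budget with a privacy accountant.

<syntaxhighlight lang="python">
import numpy as np

def dp_sanitize(gradient, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's gradient and add calibrated Gaussian noise (illustrative)."""
    rng = rng or np.random.default_rng()
    # Bound each client's influence by clipping the gradient's L2 norm
    # to at most clip_norm (step 1-2 of the DP timeline above).
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    # Mask individual contributions with Gaussian noise whose scale is
    # calibrated to the clipping bound (the Gaussian mechanism).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=gradient.shape)
    return clipped + noise

# A client sanitizes its local gradient before transmitting it to the server.
local_gradient = np.array([0.8, -1.5, 0.3])
update_to_send = dp_sanitize(local_gradient)
</syntaxhighlight>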
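Next is a toy sketch of the mask-cancellation principle behind SecAgg (Section 5.4.2). It assumes each client pair has already agreed on a shared seed (e.g., via a Diffie-Hellman key exchange); real SecAgg protocols additionally secret-share the seeds so that the masks of clients who drop out mid-round can still be removed.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def masked_update(client_id, update, pair_seeds):
    """Add pairwise masks that cancel exactly when all clients are summed.

    pair_seeds maps each client pair (i, j) with i < j to a seed that
    both parties agreed on out of band (assumed, not shown here).
    """
    masked = update.astype(float)
    for (i, j), seed in pair_seeds.items():
        if client_id not in (i, j):
            continue
        mask = np.random.default_rng(seed).normal(size=update.shape)
        # Client i adds the mask and client j subtracts it, so each
        # pair's contribution vanishes in the server-side sum.
        masked += mask if client_id == i else -mask
    return masked

# Three clients with illustrative updates and one shared seed per pair.
updates = {0: np.array([1.0, 2.0]), 1: np.array([0.5, -1.0]), 2: np.array([2.0, 0.0])}
pair_seeds = {pair: hash(pair) % (2**32)
              for pair in itertools.combinations(sorted(updates), 2)}

# The server sees only masked vectors, yet their sum equals the true sum.
server_sum = sum(masked_update(cid, upd, pair_seeds) for cid, upd in updates.items())
assert np.allclose(server_sum, sum(updates.values()))
</syntaxhighlight>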
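Finally, a sketch of additively homomorphic aggregation (Section 5.4.3) using the third-party python-paillier (<code>phe</code>) package. Key distribution is deliberately simplified: in practice the decryption key would be held by the clients or a trusted party, never by the aggregating server.

<syntaxhighlight lang="python">
# pip install phe  (the python-paillier library)
from phe import paillier

# A key authority generates the keypair; the server receives only the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (flattened) update; single scalars here for brevity.
client_updates = [0.12, -0.05, 0.31]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The server aggregates ciphertexts without ever seeing plaintext updates:
# Paillier is additively homomorphic, so adding ciphertexts sums the plaintexts.
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the private-key holder can decrypt the aggregate.
aggregate = private_key.decrypt(encrypted_sum)
print(aggregate / len(client_updates))  # federated average of the updates
</syntaxhighlight>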
== 5.4.4 IoT-Specific Considerations ==

In edge computing, IoT devices often capture highly sensitive data (patient records, vehicle sensor logs, etc.). Privacy measures must therefore operate seamlessly on low-power hardware while accommodating intermittent connectivity.

For instance, a smart healthcare device storing patient records may use DP-based local training and SecAgg to protect updates before uploading. Meanwhile, an autonomous vehicle might adopt HE to guard sensor patterns relevant to real-time traffic analysis. Together, these techniques form a multi-layered privacy defense tailored to distributed, resource-constrained IoT ecosystems.<sup>[4]</sup><sup>[5]</sup>

[[File:System-model-for-privacy-preserving-federated-learning.png|thumb|center|500px|System model illustrating privacy-preserving federated learning using homomorphic encryption.<br>Adapted from Privacy-Preserving Federated Learning Using Homomorphic Encryption.]]