= 5.5 Security Threats =

While Federated Learning (FL) enhances data privacy by ensuring that raw data remains on edge devices, it introduces significant security vulnerabilities due to its decentralized design and reliance on untrusted participants. In edge computing environments, where clients often operate with limited computational power and over unreliable networks, these threats are particularly pronounced.

== 5.5.1 Model Poisoning Attacks ==

Model poisoning attacks are a critical threat in FL, especially in edge computing environments where the infrastructure is distributed and clients may be untrusted or loosely regulated. In this type of attack, malicious clients intentionally craft and submit harmful model updates during the training process to compromise the performance or integrity of the global model. These attacks are typically categorized as either untargeted, aimed at degrading general model accuracy, or targeted (backdoor attacks), in which the global model is manipulated to behave incorrectly in specific scenarios while appearing normal in others. For instance, an attacker might train its local model with a backdoor trigger, such as a specific pixel pattern in an image, so that the global model misclassifies inputs containing that pattern even though it performs well on standard test cases.<sup>[1]</sup><sup>[4]</sup>

FL's reliance on aggregation algorithms such as Federated Averaging (FedAvg), which simply compute the average of local updates, makes it susceptible to these attacks. Since raw data is never shared, poisoned updates can be hard to detect, especially in non-IID settings where variability is expected. Robust aggregation techniques such as Krum, Trimmed Mean, and Bulyan have been proposed to resist such manipulation by filtering or down-weighting outlier contributions; a simplified comparison of plain FedAvg and trimmed-mean aggregation is sketched below. However, these algorithms often introduce computational and communication overheads that are impractical for edge devices with limited power and processing capabilities.<sup>[2]</sup><sup>[4]</sup> Furthermore, adversaries can design subtle attacks that mimic benign statistical patterns, making them indistinguishable from legitimate updates. Emerging research explores anomaly detection based on update similarity and trust scoring, yet these solutions face limitations when applied to large-scale or asynchronous FL deployments. Developing lightweight, real-time, and scalable defenses that remain effective under device heterogeneity and unreliable network conditions remains an unresolved challenge.<sup>[3]</sup><sup>[4]</sup>

[[File:FL_IoT_Attack_Detection.png|thumb|center|600px|Federated Learning-based architecture for adversarial sample detection and defense in IoT environments. Adapted from Federated Learning for Internet of Things: A Comprehensive Survey.]]
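The contrast between plain averaging and a robust alternative can be illustrated with a short example. The following is a minimal sketch, assuming client updates have already been flattened into NumPy arrays of equal length; the function names and the trim ratio are illustrative choices, not part of any standard FL library, and real robust aggregators such as Krum or Bulyan involve considerably more machinery.

<syntaxhighlight lang="python">
import numpy as np

def fedavg(updates):
    """Plain Federated Averaging: element-wise mean of all client updates."""
    return np.mean(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: for every parameter, discard the largest
    and smallest trim_ratio fraction of client values before averaging,
    which limits the influence of a few extreme (poisoned) updates."""
    stacked = np.stack(updates)              # shape: (num_clients, num_params)
    k = int(trim_ratio * stacked.shape[0])   # clients trimmed from each end
    sorted_vals = np.sort(stacked, axis=0)   # sort each coordinate independently
    if k > 0:
        sorted_vals = sorted_vals[k:-k]
    return np.mean(sorted_vals, axis=0)

# Toy demonstration: nine honest clients and one poisoned update.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
poisoned = [np.full(4, 50.0)]                 # adversary submits a huge update
updates = honest + poisoned

print("FedAvg:      ", fedavg(updates))       # pulled far away from zero
print("Trimmed mean:", trimmed_mean(updates)) # stays close to the honest mean
</syntaxhighlight>

In this toy run the single poisoned update drags the plain average far from the honest contributions, while the trimmed mean discards the extreme values in each coordinate and remains close to them.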
== 5.5.2 Data Poisoning Attacks ==

Data poisoning attacks target the integrity of FL by manipulating the training data on individual clients before model updates are generated. Unlike model poisoning, which corrupts the gradients or weights directly, data poisoning occurs at the dataset level, allowing adversaries to stealthily influence model behavior through biased or malicious data. This includes techniques such as label flipping, outlier injection, or clean-label attacks that subtly alter data without visible artifacts.

Since FL assumes client data remains private and uninspected, such poisoned data can easily propagate harmful patterns into the global model, especially in non-IID edge scenarios.<sup>[2]</sup><sup>[3]</sup> Edge devices are especially vulnerable to this form of attack due to their limited compute and energy resources, which often preclude comprehensive input validation. The highly diverse and fragmented nature of edge data, such as medical readings or driving behavior, makes it difficult to establish a clear baseline for anomaly detection. Defense strategies include robust aggregation (e.g., Median, Trimmed Mean), anomaly detection, and Differential Privacy (DP). However, these methods can reduce model accuracy or increase complexity.<sup>[1]</sup><sup>[4]</sup> There is currently no foolproof solution to detect data poisoning without violating privacy principles. As FL continues to be deployed in critical domains, mitigating these attacks while preserving user data locality and system scalability remains an open and urgent research challenge.<sup>[3]</sup><sup>[4]</sup>

== 5.5.3 Inference and Membership Attacks ==

Inference attacks represent a subtle yet powerful class of threats in FL, where adversaries seek to extract sensitive information from shared model updates rather than raw data. These attacks exploit the iterative nature of FL training. By analyzing updates, especially in over-parameterized models, attackers can infer properties of the data or even reconstruct inputs. A key example is the membership inference attack, where an adversary determines whether a specific data point was used in training. This becomes more effective in edge scenarios, where updates often correlate strongly with individual devices.<sup>[2]</sup><sup>[3]</sup>

As model complexity increases, so does the risk of gradient-based information leakage. Small datasets on edge devices amplify this vulnerability. Attackers with access to multiple rounds of updates may perform gradient inversion to reconstruct training inputs. These risks are especially serious in sensitive fields such as healthcare. Mitigations include Differential Privacy and Secure Aggregation, but both reduce accuracy or add system overhead; a simplified sketch of client-side clipping and noising appears below.<sup>[4]</sup> Designing FL systems that preserve utility while protecting against inference remains a major ongoing challenge.<sup>[1]</sup><sup>[4]</sup>
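As a rough illustration of how Differential Privacy is commonly applied on the client side, the sketch below clips an update to a fixed L2 norm and adds Gaussian noise before the update leaves the device. This is a minimal example under stated assumptions: the function name, the clipping norm, and the noise multiplier are illustrative values, not a calibrated privacy mechanism, and a real deployment would tie the noise scale to a formal privacy budget and typically combine it with Secure Aggregation.

<syntaxhighlight lang="python">
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client update to a fixed L2 norm and add Gaussian noise,
    in the spirit of differentially private federated averaging.
    Stronger noise reduces what an inference attacker can learn from
    any single update, at some cost in model accuracy."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Toy demonstration on a single simulated gradient vector.
rng = np.random.default_rng(1)
raw_update = rng.normal(0.0, 2.0, size=5)
print("raw:      ", raw_update)
print("sanitized:", dp_sanitize(raw_update, clip_norm=1.0,
                                noise_multiplier=0.5, rng=rng))
</syntaxhighlight>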
== 5.5.4 Sybil and Free-Rider Attacks ==

Sybil attacks occur when a single adversary creates multiple fake clients (Sybil nodes) to manipulate the FL process. These clients may collude to skew aggregation results or overwhelm honest participants. This is especially dangerous in decentralized or large-scale FL environments where authentication is weak or absent.<sup>[1]</sup> Without strong identity verification, an attacker can inject numerous poisoned updates, degrading model accuracy or blocking convergence entirely. Traditional defenses such as IP throttling or user registration may violate privacy principles or be infeasible at the edge. Cryptographic registration, proof-of-work, and client reputation scoring have been explored, but each has trade-offs. Clustering and anomaly detection can identify Sybil patterns, though adversaries may adapt their behavior to avoid detection.<sup>[4]</sup>

Free-rider attacks involve clients that participate in training but contribute little or nothing (e.g., sending stale models or dummy gradients) while still downloading and benefiting from the global model. This undermines fairness and wastes resources, especially on networks where honest clients expend real bandwidth and energy.<sup>[3]</sup> Mitigation strategies include contribution-aware aggregation and client auditing.

== 5.5.5 Malicious Server Attacks ==

In classical centralized FL, the server acts as coordinator, receiving updates and distributing models. However, if compromised, the server becomes a major threat. It can perform inference attacks, drop honest client updates, tamper with model weights, or inject adversarial logic. This poses significant risk in domains such as healthcare and autonomous systems.<sup>[1]</sup><sup>[3]</sup>

Secure Aggregation protects client updates by encrypting or masking them, ensuring that only aggregate values are visible to the server; a simplified sketch of the underlying masking idea appears below. Homomorphic Encryption allows computation over encrypted updates, while Secure Multi-Party Computation (SMPC) enables privacy-preserving joint computation. However, all three approaches involve high computational or communication costs.<sup>[4]</sup> Decentralized or hierarchical architectures reduce single-point-of-failure risk, but introduce new challenges around trust coordination and efficiency.<sup>[2]</sup><sup>[4]</sup>
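The core idea behind Secure Aggregation can be shown with a small numerical sketch: each pair of clients agrees on a random mask that one adds and the other subtracts, so any individual masked update looks like noise to the server, while the sum of all masked updates equals the sum of the true updates. The example below is purely illustrative; a real protocol derives the masks from pairwise shared secrets via key agreement and must handle client dropouts, none of which this sketch does.

<syntaxhighlight lang="python">
import numpy as np

def pairwise_masks(num_clients, dim, seed=0):
    """Generate cancelling pairwise masks: for each pair (i, j), client i
    adds a random mask and client j subtracts the same mask, so every mask
    cancels out when all clients' contributions are summed."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_clients, dim))
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=dim)  # real protocols derive this from a shared secret
            masks[i] += m
            masks[j] -= m
    return masks

# Toy demonstration: the server sees only masked updates, yet their sum
# equals the sum of the true updates because the pairwise masks cancel.
rng = np.random.default_rng(2)
true_updates = rng.normal(size=(4, 3))               # 4 clients, 3 parameters
masked = true_updates + pairwise_masks(4, 3)
print("sum of true updates:  ", true_updates.sum(axis=0))
print("sum of masked updates:", masked.sum(axis=0))  # identical up to float error
</syntaxhighlight>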