== 5.5.1 Model Poisoning Attacks ==

Model poisoning attacks are a critical threat in FL, especially in edge computing environments where the infrastructure is distributed and clients may be untrusted or loosely regulated. In this type of attack, malicious clients intentionally craft and submit harmful model updates during the training process to compromise the performance or integrity of the global model. These attacks are typically categorized as either untargeted attacks, which aim to degrade general model accuracy, or targeted (backdoor) attacks, where the global model is manipulated to behave incorrectly in specific scenarios while appearing normal in others. For instance, an attacker might train its local model with a backdoor trigger, such as a specific pixel pattern in an image, so that the global model misclassifies inputs containing that pattern even though it performs well on standard test cases.<sup>[1]</sup><sup>[4]</sup>

FL's reliance on aggregation algorithms like Federated Averaging (FedAvg), which simply compute the average of local updates, makes it susceptible to these attacks. Since raw data is never shared, poisoned updates can be hard to detect, especially in non-IID settings where variability between clients is expected. Robust aggregation techniques like Krum, Trimmed Mean, and Bulyan have been proposed to resist such manipulation by filtering or down-weighting outlier contributions. However, these algorithms often introduce computational and communication overheads that are impractical for edge devices with limited power and processing capabilities.<sup>[2]</sup><sup>[4]</sup>

Furthermore, adversaries can design subtle attacks that mimic benign statistical patterns, making them difficult to distinguish from legitimate updates. Emerging research explores anomaly detection based on update similarity and trust scoring, yet these solutions face limitations when applied to large-scale or asynchronous FL deployments. Developing lightweight, real-time, and scalable defenses that remain effective under device heterogeneity and unreliable network conditions remains an unresolved challenge.<sup>[3]</sup><sup>[4]</sup>

[[File:FL_IoT_Attack_Detection.png|thumb|center|600px|Federated Learning-based architecture for adversarial sample detection and defense in IoT environments. Adapted from Federated Learning for Internet of Things: A Comprehensive Survey.]]
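To make the contrast between plain averaging and robust aggregation concrete, the sketch below compares FedAvg with a coordinate-wise trimmed mean on simulated client updates. This is an illustrative NumPy example rather than code from the cited works; the function names, trim ratio, and toy data are assumptions chosen for clarity.

<syntaxhighlight lang="python">
import numpy as np

def fedavg(updates):
    """Plain Federated Averaging: the unweighted mean of client updates.
    A single large poisoned update can shift this result arbitrarily far."""
    return np.mean(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: for each model parameter, discard the
    largest and smallest trim_ratio fraction of client values, then average.
    This bounds the influence of a small number of outlier (poisoned) updates."""
    updates = np.asarray(updates)               # shape: (num_clients, num_params)
    k = int(trim_ratio * updates.shape[0])      # values to trim per side
    sorted_updates = np.sort(updates, axis=0)   # sort each coordinate independently
    kept = sorted_updates[k:updates.shape[0] - k]
    return kept.mean(axis=0)

# Toy round: 9 benign clients whose updates cluster near 1.0, plus 1 poisoned client.
rng = np.random.default_rng(0)
benign = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
poisoned = np.full((1, 4), 50.0)                # malicious update with large magnitude
updates = np.vstack([benign, poisoned])

print("FedAvg:      ", fedavg(updates))                        # skewed toward 50
print("Trimmed mean:", trimmed_mean(updates, trim_ratio=0.1))  # stays close to 1.0
</syntaxhighlight>

The trade-off noted above is visible even in this sketch: the trimmed mean requires sorting every parameter coordinate across all clients each round, which is more expensive than a simple average and grows with both model size and client count.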