== 5.5.2 Data Poisoning Attacks ==

Data poisoning attacks target the integrity of FL by manipulating the training data on individual clients before model updates are generated. Unlike model poisoning, which corrupts gradients or weights directly, data poisoning occurs at the dataset level, allowing adversaries to stealthily influence model behavior through biased or malicious data. Techniques include label flipping, outlier injection, and clean-label attacks that subtly alter data without visible artifacts. Because FL assumes client data remains private and uninspected, poisoned data can propagate harmful patterns into the global model, especially in non-IID edge scenarios.<sup>[2]</sup><sup>[3]</sup>

Edge devices are especially vulnerable to this form of attack because their limited compute and energy resources often preclude comprehensive input validation. The highly diverse and fragmented nature of edge data, such as medical readings or driving behavior, also makes it difficult to establish a clear baseline for anomaly detection.

Defense strategies include robust aggregation (e.g., coordinate-wise Median, Trimmed Mean), anomaly detection, and Differential Privacy (DP). However, these methods can reduce model accuracy or increase system complexity.<sup>[1]</sup><sup>[4]</sup> There is currently no foolproof way to detect data poisoning without violating privacy principles. As FL continues to be deployed in critical domains, mitigating these attacks while preserving user data locality and system scalability remains an open and urgent research challenge.<sup>[3]</sup><sup>[4]</sup>
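The intuition behind robust aggregation can be illustrated with a minimal sketch. Assume a round in which three honest clients report similar update vectors while one client, having trained on poisoned (e.g., label-flipped) data, reports a skewed update. The numbers, client counts, and the `aggregate` helper below are illustrative, not taken from any particular FL framework; the point is that a coordinate-wise median stays near the honest cluster while a plain average is dragged toward the attacker.

```python
# Sketch: coordinate-wise median vs. plain averaging under one poisoned update.
# Vectors stand in for client model updates; all values are illustrative.
from statistics import mean, median

# Three honest clients report similar updates; one poisoned client
# (hypothetically trained on label-flipped data) reports a skewed update.
honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]
poisoned = [[10.0, 8.0]]
updates = honest + poisoned

def aggregate(updates, reducer):
    """Combine client updates coordinate by coordinate with the given reducer."""
    return [reducer(coord) for coord in zip(*updates)]

avg = aggregate(updates, mean)    # pulled toward the attacker's vector
med = aggregate(updates, median)  # stays close to the honest cluster
print("mean:  ", avg)
print("median:", med)
```

The median resists a single outlier because it ignores extreme coordinates, which is exactly why it degrades gracefully only while attackers remain a minority; with enough colluding clients, the median itself can be shifted. Trimmed Mean applies the same idea by discarding the largest and smallest values per coordinate before averaging.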