== 5.6.1 Model Compression Techniques ==

Model compression techniques are essential for reducing the computational and storage demands of FL models on edge devices. By decreasing the model size, these techniques enable more efficient local training and minimize the communication overhead of model updates. Common methods include:

* '''Pruning''': This technique removes less significant weights or neurons from the neural network, yielding a sparser model that requires less computation and storage. For instance, one proposed framework prunes the global model on a powerful server before distributing it to clients, reducing the computational load on resource-constrained edge devices.<sup>[1]</sup>
* '''Quantization''': This method reduces the precision of the model's weights, such as converting 32-bit floating-point numbers to 8-bit integers, thereby shrinking the model and accelerating computation. Careful implementation is necessary to balance model size reduction against potential accuracy loss.
* '''Knowledge Distillation''': In this approach, a large, complex model (the teacher) trains a smaller, simpler model (the student) by transferring knowledge, allowing the student to approach the teacher's performance at a fraction of the resource cost. This technique has been applied effectively in FL to accommodate the constraints of edge devices.<sup>[2]</sup>
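The pruning idea above can be sketched concretely. The following is a minimal, illustrative example of magnitude-based pruning on a flat weight list, not the method of the cited framework; the function name and the sparsity parameter are hypothetical:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until the given
    fraction (sparsity) of the weights has been removed."""
    n_prune = int(len(weights) * sparsity)
    # Rank weight indices by absolute magnitude, smallest first
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0  # zeroed weights need not be stored or multiplied
    return pruned

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(magnitude_prune(weights, 0.5))
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In a server-side pruning setup, a step like this would run on the server before each round, so clients download and train a sparser model.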
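The float32-to-int8 conversion mentioned under quantization can be illustrated with symmetric linear quantization, a common scheme; this is a simplified sketch (per-tensor scale, no zero-point), and the function names are illustrative:

```python
def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] using a
    single scale factor derived from the largest magnitude."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.9, -0.05, 0.4, 0.01, -0.7]
q, scale = quantize_int8(weights)          # q fits in 1 byte per weight
approx = dequantize(q, scale)              # each entry within scale/2 of original
```

The rounding step is where accuracy can be lost: every weight is reconstructed only to within half a quantization step, which is the trade-off the text refers to.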