= 5.1 Overview and Motivation =

'''Federated Learning (FL)''' is a decentralized approach to machine learning in which many edge devices, called clients, collaboratively train a shared model ''without'' sending their private data to a central server. Each client trains the model locally on its own data and sends only the model updates (such as gradients or weights) to a central server, often called an '''aggregator'''. The server combines these updates into a new global model, which is then sent back to the clients for the next round of training. This cycle repeats, allowing the model to improve while the raw data stays on the devices. FL shifts the focus from central data collection to collaborative training, supporting privacy and scalability in machine learning systems.<sup>[1]</sup>

The main motivation for FL is to address concerns about data privacy, security, and communication efficiency, especially in edge computing, where huge amounts of data are generated across many heterogeneous and often resource-limited devices. Centralized learning struggles here because of limited bandwidth, high data transfer costs, and strict privacy regulations such as the '''General Data Protection Regulation (GDPR)''' and the '''Health Insurance Portability and Accountability Act (HIPAA)'''. FL mitigates these problems by keeping data on the device, reducing both the risk of leaks and the need for constant cloud connectivity. It also lowers communication costs by sharing small model updates instead of full datasets, making it well suited to real-time learning in mobile and edge networks.<sup>[2]</sup>

In edge computing, FL represents a major shift in how machine learning is done. It supports distributed intelligence even when devices have limited resources, unreliable connections, or very different data distributions ('''non-IID'''). Clients can join training rounds at different times, work around network delays, and adapt to their hardware constraints. This makes FL a flexible option for edge environments with varying battery levels, processing power, and storage. FL can also support personalized models using techniques such as '''federated personalization''' or '''clustered aggregation'''. Overall, it provides a strong foundation for building AI systems that are scalable, private, and better suited to the challenges of modern distributed computing.<sup>[1]</sup><sup>[2]</sup><sup>[3]</sup>
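The train-locally, aggregate-centrally cycle described above can be sketched in a few lines. The following is a minimal, self-contained illustration of FedAvg-style aggregation on a toy linear-regression task; the function names (<code>local_update</code>, <code>federated_averaging</code>), the model, the learning rate, and the client data are all illustrative assumptions, not part of any specific FL framework.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    linear model (mean-squared-error loss), using only this client's data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: broadcast the global model, collect each client's
    locally trained weights, and aggregate them weighted by dataset size.
    Only model parameters cross the network; raw data stays on the clients."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in clients:
            updates.append(local_update(global_w, data, labels))
            sizes.append(len(labels))
        # FedAvg-style aggregation: size-weighted average of client models
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Toy usage: three clients whose data follows the same linear relation
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))
w = federated_averaging(np.zeros(2), clients, rounds=20)
```

In this sketch the clients' data happens to be IID; in real edge deployments the non-IID case discussed above makes plain weight averaging less effective, which is one motivation for the personalization and clustered-aggregation techniques mentioned in this section.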