===7.1.3 Scheduling Strategies and Algorithms===

====Collaboration Models====

At a high level, the collaboration model used in an edge computing architecture determines the system's strengths and weaknesses. Collaboration can occur among devices (things), edge servers, and cloud infrastructure. Things-edge collaboration allows smart devices (things) to offload computation to nearby edge servers, reducing latency and conserving device energy [3]. This model is common in scenarios where rapid responses are crucial, such as autonomous vehicles or mobile applications. Things-edge-cloud collaboration extends this by leveraging both edge and cloud resources: tasks are dynamically split or redirected based on resource availability and performance goals, enabling scalability for complex, data-intensive applications such as AI and 3D sensing for industrial IoT.

Edge-edge and edge-cloud collaboration further enhance system flexibility and efficiency. In edge-edge collaboration, overloaded edge servers can offload tasks to other edge nodes, promoting better load balancing and helping meet QoS requirements. Edge-cloud collaboration, by contrast, lets cloud services push computation closer to the user; for example, video transcoding can take place on a home WiFi access point instead of in the cloud layer [3].

====Algorithms====

Task and resource scheduling methods can be divided into centralized and distributed approaches. Centralized methods include convex optimization, approximate algorithms, heuristic algorithms, and machine learning. Distributed methods include game theory, matching theory, auctions, federated learning, and blockchain. The effectiveness of each method can be measured by the following performance indicators: latency, energy consumption, cost, utility, profit, and resource utilization [1].
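The things-edge offloading decision described above can be sketched as a simple cost comparison: the device weighs the latency and energy of local execution against transmitting the input and executing remotely. This is a minimal illustration only; all parameter names and default values below are illustrative assumptions, not figures from the cited surveys.

```python
# Hypothetical cost model for a things-edge offloading decision.
# All parameters and defaults are illustrative assumptions.

def local_cost(cycles, device_freq_hz, energy_per_cycle_j):
    """Latency (s) and energy (J) of running the task on the device."""
    latency = cycles / device_freq_hz
    energy = cycles * energy_per_cycle_j
    return latency, energy

def offload_cost(input_bits, uplink_bps, cycles, edge_freq_hz, tx_power_w):
    """Latency (s) and device-side energy (J) of offloading to an edge server."""
    tx_time = input_bits / uplink_bps
    latency = tx_time + cycles / edge_freq_hz   # transmit, then execute remotely
    energy = tx_time * tx_power_w               # device only pays for transmission
    return latency, energy

def should_offload(cycles, input_bits, *, device_freq_hz=1e9,
                   edge_freq_hz=10e9, uplink_bps=50e6,
                   energy_per_cycle_j=1e-9, tx_power_w=0.5,
                   latency_weight=0.5):
    """Offload when the weighted latency+energy cost is lower at the edge."""
    l_lat, l_en = local_cost(cycles, device_freq_hz, energy_per_cycle_j)
    o_lat, o_en = offload_cost(input_bits, uplink_bps, cycles,
                               edge_freq_hz, tx_power_w)
    local = latency_weight * l_lat + (1 - latency_weight) * l_en
    remote = latency_weight * o_lat + (1 - latency_weight) * o_en
    return remote < local
```

Under these assumed parameters, a compute-heavy task with a small input favors offloading, while a light task with a large input stays local: the transmission cost dominates.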
[[File:Research techniques.png|600px|thumb|center| Diagram of research techniques in task/resource scheduling [1]]]

====Centralized Methods====

Convex optimization is widely used and well studied for its mathematical rigor and its ability to yield sub-optimal yet efficient solutions, for example through Lyapunov optimization for online decision-making. However, its practicality in real-world deployments is limited by the complexity of solving certain problems in parallel. Simpler approaches such as approximate algorithms, including Markov Decision Processes and k-means clustering, offer more flexibility and ease of implementation but often suffer from local optima and unreliable performance. Similarly, heuristic algorithms, often based on greedy strategies, provide quick and practical solutions but may also fail to reach the global optimum. Machine learning techniques, especially deep learning, scale effectively with hardware and can model complex non-linear patterns, yet they introduce challenges in explainability and require intensive training [4].

====Decentralized & Distributed Methods====

Several techniques exist on the decentralized and distributed side as well. Game theory models users as rational agents seeking equilibrium, providing practical yet possibly sub-optimal solutions. Matching theory is useful for binary offloading scenarios but struggles with more nuanced partial offloading. Auction-based models mirror economic systems, aligning task requests with resource availability. Federated learning addresses privacy and scalability by training models collaboratively across devices without sharing raw data, though it requires careful orchestration. Blockchain, another decentralized approach, ensures data integrity and trust in scheduling decisions but introduces significant latency, making it less suitable for real-time tasks.
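As one concrete instance of the auction-based models mentioned above, a sealed-bid reverse (Vickrey) auction lets edge nodes bid their cost to execute a task: the lowest bidder wins and is paid the second-lowest bid, which makes truthful bidding a dominant strategy. This is a minimal sketch; the node names and bid values are illustrative assumptions.

```python
# Sketch of an auction-based scheduler: a sealed-bid reverse (Vickrey)
# auction in which edge nodes bid their cost to run a task.
# Node ids and bids below are illustrative assumptions.

def run_reverse_auction(bids):
    """bids: dict mapping node id -> claimed cost to execute the task.
    Returns (winning node, payment it receives)."""
    if len(bids) < 2:
        raise ValueError("a Vickrey auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1])  # cheapest first
    winner, _ = ranked[0]
    _, second_lowest = ranked[1]
    return winner, second_lowest  # winner is paid the runner-up's bid

winner, payment = run_reverse_auction({"edge-A": 4.0, "edge-B": 2.5, "edge-C": 3.1})
# edge-B wins and is paid edge-C's bid of 3.1
```

Because the payment depends only on the other nodes' bids, no node can lower its payment by misreporting its cost, which is why Vickrey-style mechanisms are attractive for aligning self-interested edge providers with system-wide scheduling goals.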
These decentralized strategies are increasingly critical as edge networks grow more heterogeneous and resource-constrained [4].
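Federated learning, the other distributed technique discussed above, can be illustrated with the core aggregation step of federated averaging: devices train locally and upload only model parameters, which a coordinator averages weighted by local dataset size, so raw data never leaves the device. The parameter vectors and sample counts below are illustrative assumptions.

```python
# Minimal sketch of the federated-averaging (FedAvg) aggregation step.
# Parameter vectors and sample counts are illustrative assumptions.

def federated_average(updates):
    """updates: list of (num_local_samples, parameter_list) pairs.
    Returns the sample-weighted average of the parameter vectors."""
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    avg = [0.0] * dim
    for n, params in updates:
        for i, p in enumerate(params):
            avg[i] += (n / total) * p
    return avg

# Two devices with unequal data volumes: the larger dataset dominates.
global_model = federated_average([(100, [1.0, 2.0]), (300, [3.0, 4.0])])
# -> [2.5, 3.5]
```

The orchestration burden noted above comes from everything around this one step: selecting participants, tolerating stragglers and dropouts, and securing the parameter uploads across heterogeneous edge devices.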