== 7.1 Task and Resource Scheduling ==

=== 7.1.1 Introduction ===
Task and resource scheduling in edge computing aims to tackle latency reduction, energy efficiency, and resource optimization by intelligently coordinating the architecture and configuration of the edge computing system. According to [1], the relevant factors can be summarized as follows:
* Resources: hardware and software capabilities that provide communication, storage/caching, and computing
* Tasks: high-level applications of the system, such as safety, health monitoring, and security
* Participants: the computational components that can collaborate with one another, i.e., users/"things", edge, and cloud
* Objectives: low-level goals of the system, e.g., lower latency for traffic safety or reduced energy consumption for device longevity
* Actions: methods to achieve the objectives, e.g., computation offloading, resource allocation, and resource provisioning
* Methodology: strategies to best carry out the actions above, categorized as centralized and decentralized

Aspects of these factors are discussed below.

[[File:Edge arch.png|600px|thumb|center|Architecture and examples of cloud vs. edge vs. thing/user [1]]]

=== 7.1.2 Core Challenges in Scheduling ===

==== Dynamic, Real-time Environments ====
The main challenges in task and resource scheduling are the heterogeneous and constrained edge resources these systems work with, the dynamic workloads and real-time requirements of the environments they are deployed in, and privacy and security concerns. On the first point, not only are the devices heterogeneous and geographically distributed, but the data volume and data attributes are heterogeneous as well. While challenging, careful task and resource scheduling can mitigate the effects of this heterogeneity. For dynamic workloads and real-time requirements, the main complication stems from users moving around with their devices.
Smartphones, connected vehicles (CVs), and other mobile devices are constantly moving through the world, connecting to different nodes over various protocols. Per [2], offloading and caching decisions can become incorrect very quickly after they are made, as users move out of range of one deployment and into range of another.

==== Security and Privacy ====
Security and privacy challenges fall into three categories: system-level, service-level, and data-level. System-level concerns cover the overall reliability of the edge system, whether threatened by intrusion or malfunction. Service-level concerns cover user authentication/authorization and the validation of offload nodes. Data-level concerns cover the trustworthiness and protection of data as it passes between IoT devices and edge nodes, since this data can contain sensitive information.

==== Resource Allocation ====
A large portion of research on this topic addresses resource allocation and computation offloading. Resource allocation focuses on efficiently distributing computing, communication, and storage resources to support offloaded tasks. Some studies consider single-resource allocation (e.g., just bandwidth or CPU cycles), while many optimize joint allocation of multiple resources (e.g., computing and communication). More comprehensive approaches also include caching strategies to reduce latency. Allocation decisions aim to balance energy use, service quality, and operational cost, often using techniques such as optimization algorithms or machine learning to adapt dynamically to changing workloads [2].

Computation offloading in edge computing determines whether, and how much of, a task should be processed locally or offloaded to another node (edge or cloud). Offloading can occur vertically or horizontally, that is, device-to-edge, edge-to-cloud, cloud-to-edge, edge-to-edge, or even device-to-device, and helps optimize factors like latency, energy consumption, and system cost.
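The offloading choice just described can be illustrated with a toy latency model. This is a hypothetical sketch, not an algorithm from the cited surveys: it estimates completion time as transmission time plus computation time for each candidate target and picks the minimum. All CPU speeds, link rates, and task sizes are made-up parameters.

```python
# Illustrative sketch: choosing an offload target by estimated
# completion time. All device/network parameters are hypothetical.

def completion_time(cycles, data_bits, cpu_hz, uplink_bps):
    """Estimated latency = transmission time + computation time."""
    transmit = data_bits / uplink_bps if uplink_bps else 0.0  # local: no uplink
    compute = cycles / cpu_hz
    return transmit + compute

def choose_target(task_cycles, task_bits, targets):
    """Pick the target (local, edge, or cloud) with the lowest estimate."""
    return min(
        targets,
        key=lambda t: completion_time(task_cycles, task_bits,
                                      t["cpu_hz"], t["uplink_bps"]),
    )

targets = [
    {"name": "local", "cpu_hz": 1e9,  "uplink_bps": 0},     # no transmission
    {"name": "edge",  "cpu_hz": 10e9, "uplink_bps": 50e6},  # nearby server
    {"name": "cloud", "cpu_hz": 50e9, "uplink_bps": 5e6},   # fast, far, slow link
]

best = choose_target(task_cycles=2e9, task_bits=8e6, targets=targets)
print(best["name"])  # the edge wins: the cloud's faster CPU loses to its slow link
```

With these numbers the nearby edge server wins even though the cloud has far more compute, capturing why transmission cost dominates the vertical/horizontal offloading decision.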
Offloading can be binary (all or nothing) or partial (a portion of the task is offloaded), and decisions depend on resource availability, task size, and QoS requirements [3].

=== 7.1.3 Scheduling Strategies and Algorithms ===

==== Collaboration Manners ====
At a high level, the collaboration manner used in an edge computing architecture determines the system's strengths and weaknesses. The various collaboration models span devices (things), edge servers, and cloud infrastructure. Things-edge collaboration allows smart devices (things) to offload computation to nearby edge servers, reducing latency and conserving device energy [3]. This model is commonly used where rapid responses are crucial, such as in autonomous vehicles or mobile applications. Things-edge-cloud collaboration extends this by leveraging both edge and cloud resources: tasks are dynamically split or redirected based on resource availability and performance goals, enabling scalability for complex, data-intensive applications such as AI and 3D sensing for industrial IoT.

Edge-edge and edge-cloud collaboration further enhance system flexibility and efficiency. In edge-edge collaboration, overloaded edge servers can offload tasks to other edge nodes, promoting better load balancing to meet QoS requirements. Meanwhile, edge-cloud collaboration enables cloud services to push computation closer to the user; for example, video transcoding can take place on a home WiFi access point instead of in the cloud layer [3].

==== Algorithms ====
Task and resource scheduling methods can be split between centralized and distributed approaches. Centralized methods include convex optimization, approximation algorithms, heuristic algorithms, and machine learning. Distributed methods include game theory, matching theory, auctions, federated learning, and blockchain.
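As one concrete flavor of the distributed methods listed above, matching theory can pair tasks with edge nodes via deferred acceptance. The sketch below is illustrative only: the preference lists (`task_prefs`, `node_prefs`) are hypothetical, whereas a real scheduler would derive them from latency, load, and energy estimates.

```python
# Toy sketch of the matching-theory approach: one-to-one deferred
# acceptance (Gale-Shapley) between tasks and edge nodes.
# Preference lists are hypothetical placeholders.

def deferred_acceptance(task_prefs, node_prefs):
    """Tasks propose to nodes in preference order; each node
    tentatively keeps its most-preferred proposer."""
    rank = {n: {t: i for i, t in enumerate(prefs)}
            for n, prefs in node_prefs.items()}
    next_choice = {t: 0 for t in task_prefs}   # next node index each task tries
    engaged = {}                               # node -> task
    free = list(task_prefs)
    while free:
        task = free.pop()
        node = task_prefs[task][next_choice[task]]
        next_choice[task] += 1
        current = engaged.get(node)
        if current is None:
            engaged[node] = task
        elif rank[node][task] < rank[node][current]:
            engaged[node] = task
            free.append(current)               # displaced task proposes again
        else:
            free.append(task)                  # rejected; tries its next node
    return engaged

task_prefs = {"t1": ["n1", "n2"], "t2": ["n1", "n2"]}
node_prefs = {"n1": ["t2", "t1"], "n2": ["t1", "t2"]}
print(deferred_acceptance(task_prefs, node_prefs))  # {'n1': 't2', 'n2': 't1'}
```

Both tasks prefer node n1, but n1 prefers t2, so t1 is displaced and settles for n2; the result is stable in that no task-node pair would both rather be matched to each other. This stability property is what makes matching theory attractive for binary offloading, per the discussion below.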
The effectiveness of each method can be measured by the following performance indicators: latency, energy consumption, cost, utility, profit, and resource utilization [1].

[[File:Research techniques.png|600px|thumb|center|Diagram of research techniques in task/resource scheduling [1]]]

==== Centralized Methods ====
Here, these techniques are discussed in more detail. Convex optimization is widely used and well studied for its mathematical rigor and its ability to yield sub-optimal yet efficient solutions, for example through Lyapunov optimization for online decision-making. However, its practicality in real-world deployments is limited by the complexity of solving certain problems in parallel. Simpler approaches like approximation algorithms, including Markov decision processes and k-means clustering, offer more flexibility and ease of implementation but often suffer from local optima and unreliable performance. Similarly, heuristic algorithms, often based on greedy strategies, provide quick and practical solutions but may fail to reach the global optimum. Machine learning techniques, especially deep learning, scale effectively with hardware and can model complex non-linear patterns, yet they introduce challenges in explainability and require intensive training [4].

==== Decentralized & Distributed Methods ====
There are several techniques on the decentralized and distributed side as well. Game theory models users as rational agents seeking equilibrium, providing practical yet possibly sub-optimal solutions. Matching theory is useful for binary offloading scenarios but struggles with more nuanced partial offloading. Auction-based models mirror economic systems, aligning task requests with resource availability. Meanwhile, federated learning addresses privacy and scalability by training models collaboratively across devices without sharing raw data, though it requires careful orchestration.
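To make the federated learning idea concrete before moving on, the following sketch shows a FedAvg-style aggregation step: edge devices report locally trained parameters and sample counts, and a coordinator computes a weighted average without ever seeing the raw data. The parameter vectors and sample counts are illustrative numbers, not results from any cited work.

```python
# Minimal sketch of federated averaging (FedAvg): clients share model
# parameters, never raw data. All values below are illustrative.

def federated_average(client_updates):
    """Weighted average of client parameter vectors by local sample count."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(params[i] * n for params, n in client_updates) / total
        for i in range(dim)
    ]

# (parameters, num_local_samples) reported by three edge devices
updates = [
    ([0.2, 1.0], 100),
    ([0.4, 0.8], 300),
    ([0.1, 1.2], 100),
]
print(federated_average(updates))  # ~[0.3, 0.92]: weighted toward the 300-sample client
```

The weighting by sample count is what keeps the global model from being skewed by clients that trained on very little data; the orchestration burden mentioned above lies in collecting these updates reliably from intermittently connected devices.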
Blockchain, another decentralized approach, ensures data integrity and trust in scheduling decisions but introduces significant latency, making it less suitable for real-time tasks. These decentralized strategies are increasingly critical as edge networks grow more heterogeneous and resource-constrained [4].

=== 7.1.4 Future Trends and Research Directions ===

==== Computation Migration ====
Much research is underway in this emerging field, with a significant push in computation migration and task partitioning: finding cooperation between edge devices to share not only whole tasks but parts within each task [5]. Dividing a task, determining the nature of its subtasks, and finding optimal offloading ratios between edge nodes is a key area of study.

==== Green Computing ====
Another ongoing research direction is green computing, i.e., powering edge devices with electricity from renewable resources. This direction is important as computing usage grows in smart cities and more technically advanced societies, but it adds a further consideration to task and resource scheduling, since energy is consumed in collaboration models for both processing and transmission. Moreover, the energy supplied by renewables can be unstable, requiring dynamic setups to maintain uptime [6].

[[File:Research issues.png|600px|thumb|center|Diagram of categorized research issues [1]]]

=== 7.1.5 Conclusion ===
We have discussed task and resource scheduling as an exciting path toward unlocking the full potential of edge computing. As edge systems continue to evolve and scale, the ability to intelligently manage computation offloading, resource allocation, and collaboration across heterogeneous and distributed nodes becomes increasingly important.
These scheduling mechanisms not only reduce latency and energy consumption but also enable scalability, resilience, and responsiveness in real-world applications such as autonomous vehicles and industrial IoT networks. This emerging work will allow edge computing systems to operate more efficiently, securely, and adaptively in dynamic environments.

=== References ===
[1] Luo, Quyuan, et al. "Resource scheduling in edge computing: A survey." IEEE Communications Surveys & Tutorials 23.4 (2021): 2131-2165.

[2] Lin, Li, et al. "Computation offloading toward edge computing." Proceedings of the IEEE 107.8 (2019): 1584-1607.

[3] Islam, Akhirul, et al. "A survey on task offloading in multi-access edge computing." Journal of Systems Architecture 118 (2021): 102225.

[4] Khan, Wazir Zada, et al. "Edge computing: A survey." Future Generation Computer Systems 97 (2019): 219-235.

[5] Xiao, K., et al. "Task offloading and resources allocation based on fairness in edge computing." Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2019: 1-6.

[6] Jiang, Congfeng, et al. "Energy aware edge computing: A survey." Computer Communications 151 (2020): 556-580.