Edge Computing Products and Frameworks

From Edge Computing Wiki
Revision as of 00:12, 25 April 2025 by Piercen (talk | contribs)

Edge Computing Products and Frameworks[edit]

This chapter covers various products and frameworks in edge computing applications, including closed/paid services such as AWS, free and open source frameworks, and serverless products.

3.1 Industry Products: AWS as an Example[edit]

From the previous chapters, it has probably become pretty evident that edge computing will play a key role in applications that are:

  • Latency-critical
  • Data-intensive, where transferring massive amounts of data to the cloud is not cost-effective
  • Deployed where Internet access is unreliable or unavailable

Edge computing has already proliferated in several industries, and companies like Amazon (AWS), Microsoft (Azure), Google (Distributed Cloud Edge), and Nvidia (EGX Edge Computing Platform) have implemented edge solutions that enable diverse industries to leverage this form of computing for their operational needs. Since this chapter could be an entire book of its own if we went into the details of how each of their respective edge offerings works, for brevity we will focus on Amazon Web Services' edge technology offerings. The rest of this section describes the edge products offered by Amazon and presents examples of how this technology is being leveraged by their customers.

Snowball Family[edit]

This product line lets customers store and move massive amounts of data (up to exabytes) in environments where Internet connectivity is limited, too expensive, or too slow. It also allows customers to migrate data from physical data centers to the cloud, and to perform compute on the devices themselves (like a mobile data center). There are three main device types:

  • Snowcone: A small and rugged edge computing device for environments with limited power and space.
  • Snowball Edge: A larger device in the same family, capable of transferring up to 80 terabytes, which can come equipped with GPUs for ML and video analysis.
  • Snowmobile: A shipping container-sized device for extremely large-scale data transfers (100 petabytes at once), typically used when moving entire data centers to AWS.
The AWS Snowball Family (https://www.geeksforgeeks.org/aws-snow-family/)
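
To see why shipping data physically can beat the network, consider a rough back-of-the-envelope calculation (the link speed and transit time below are illustrative assumptions, not AWS specifications):

```python
# Rough sketch: time to move 80 TB (one Snowball Edge's capacity)
# over the network vs. shipping the device. All figures here are
# illustrative assumptions, not AWS numbers.

def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    """Days needed to push `terabytes` over a sustained link."""
    bits = terabytes * 1e12 * 8              # TB -> bits (decimal units)
    seconds = bits / (megabits_per_sec * 1e6)
    return seconds / 86_400                  # seconds -> days

# 80 TB over a dedicated 100 Mbit/s uplink:
print(round(transfer_days(80, 100), 1))      # ~74.1 days
# The same 80 TB shipped on a rugged device: a few days in transit,
# regardless of link speed -- the intuition behind the Snow family.
```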

Novetta, an advanced analytics company serving government entities such as defense and intelligence agencies, uses Snowball Edge with Amazon's EC2 and onboard storage to aid disaster response efforts in the field. They use it to post-process video surveillance feeds and track the location of assets critical to disaster response, allowing them to operate where Internet connectivity is poor. Because the government sets a high bar for security, the device's rugged, certified design makes it one of the few products approved for government use. Novetta is able to deploy ML models trained in Amazon's SageMaker on the Snowball Edge for no-lag object detection.

Outpost Family[edit]

This product allows customers to extend AWS functionality to on-premises servers, providing a hybrid cloud architecture. This is useful when a customer wants to use AWS infrastructure but not in the cloud, whether for data privacy or latency reasons, or wants to run services mainly on local servers and scale to the cloud when resource utilization increases.

AWS Outposts servers deliver compute and networking services for locations with space and capacity constraints. (https://aws.amazon.com/outposts/servers/)

Inmarsat, a mobile satellite communications provider, transitioned their IT operations to the AWS stack. During the transition, however, they found that some of their applications did not conform to cloud-native infrastructure; for example, some workloads faced 20-30 ms latency, causing downstream performance issues. For them, Outposts was a perfect solution: they could deploy it on-premises in their data centers, gaining low-latency data access. This also let them manage their Outposts systems with the same management interface used for the AWS cloud, and gave them the ability to fail over from an edge location to the cloud for high-availability applications.

CloudFront[edit]

This service is a global content delivery network (CDN) that delivers data, videos, applications, and other assets with low latency and high transfer speeds. It does this by distributing content to a network of edge locations around the world so that data is hosted closer to users. It integrates seamlessly with other AWS services such as S3, EC2, and Lambda, and customers can use its detailed metrics and logs via CloudWatch to run analytics and generate detailed reports.
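
The latency benefit of a CDN comes from answering each request from the closest point of presence. A minimal sketch of that routing decision (the edge locations and latency figures below are made up for illustration; real CDNs use DNS and anycast routing, not a lookup table):

```python
# Toy model of CDN request routing: pick the edge location with the
# lowest measured latency to the client. Locations and latencies are
# invented for illustration.

edge_latency_ms = {
    "us-east-1": 12.0,
    "eu-west-1": 85.0,
    "ap-south-1": 140.0,
}

def nearest_edge(latencies: dict) -> str:
    """Return the edge location with the lowest latency to the client."""
    return min(latencies, key=latencies.get)

print(nearest_edge(edge_latency_ms))  # us-east-1
```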

AWS CloudFront global network (https://aws.amazon.com/blogs/aws/200-amazon-cloudfront-points-of-presence-price-reduction/)

Kaltura, a video experience provider which has a video platform, video player, and other solutions, utilizes Amazon's CloudFront to serve video content to its users from CDNs (edge servers) that are closest to the end user. In addition, they use data analytics to develop benchmarks and best practices. They also use the AWS IoT Services platform to perform software updates on the set-top boxes, send push notifications, and schedule jobs on the devices.

AWS IoT[edit]

This platform allows users to connect multiple devices to the cloud to control and manage them, along with aggregating analytics for these devices. AWS IoT Greengrass extends AWS services to edge devices so that they can still locally process the data being gathered using AWS Lambda functions, while using cloud functionality for management, storage, and analytics. This allows data to be processed locally, reducing bandwidth usage, and works well in low-latency applications. It also lets devices communicate with each other securely even when they are not connected to the Internet, syncing data with the cloud when the connection is restored.
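
The buffer-locally-and-sync-on-reconnect behavior described above can be sketched as a simple store-and-forward queue (a hand-rolled illustration of the pattern, not Greengrass's actual implementation):

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward sketch: queue cloud-bound messages while
    offline, flush the backlog in order once connectivity returns."""

    def __init__(self):
        self.pending = deque()   # messages waiting for connectivity
        self.synced = []         # stand-in for messages delivered to the cloud
        self.online = False

    def publish(self, message: dict):
        if self.online:
            self.synced.append(message)    # deliver immediately
        else:
            self.pending.append(message)   # buffer until reconnect

    def reconnect(self):
        self.online = True
        while self.pending:                # drain the backlog in order
            self.synced.append(self.pending.popleft())

buf = EdgeBuffer()
buf.publish({"sensor": "temp", "value": 21.5})  # offline: buffered locally
buf.reconnect()                                  # backlog flushed to cloud
print(len(buf.synced))                           # 1
```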

AWS IoT Greengrass Architecture (https://aws.amazon.com/blogs/architecture/field-notes-connecting-industrial-assets-and-machines-to-the-aws-cloud/)

Novetta, the company also using Snowball Edge, embeds AWS IoT Greengrass into its own sensor suite. This way, they can leverage the strengths of Greengrass: locally managing the sensors, enabling messaging, and deploying their trained ML models on the sensors. Since AWS Lambda functions are serverless, they can be instantiated whenever inference needs to be run on these models, triggered by local events or messages from the cloud, enabling real-time responses.

Additional Services[edit]

The services listed above are a very small subset of all the edge services provided by Amazon Web Services. Some of the ones not explicitly mentioned above are:

  • AWS Wavelength: This is designed to deliver applications for mobile devices that require ultra-low latency. This is delivered using 5G, and allows telecommunication providers to leverage the compute in the Wavelength devices (say for AR/VR, autonomous driving, live media streaming, etc.).
  • AWS RoboMaker: This allows developers to build, test, and simulate robotic applications on the cloud to help accelerate robotics development. It integrates with Amazon SageMaker, AWS IoT, and AWS Lambda for advanced ML capabilities, cloud storage, and real-time data processing. You can virtually test robotic applications and deploy them on physical devices at scale. This comes with fleet management capabilities, and analytics using CloudWatch.

The main takeaway from this section is that edge solutions blur the boundary between user devices and the cloud by extending the cloud to edge devices. Developers prefer the flexibility to work at different layers (cloud, edge, or end device) without the hassle of integrating disparate devices themselves. AWS offers a strongly integrated ecosystem of solutions that build on top of each other and extend AWS functionality to external end devices, enabling developers to tailor AWS Edge to their industry and use cases.

3.2 Open Source Frameworks[edit]

Open source software provides transparent, publicly available code with flexible licensing that allows anyone to use, modify, or build upon it. While quality varies across open source projects, the best ones offer strong security, reliability, efficiency, and ready-to-use tools for developers. For these reasons, open source is a great option for reducing the costs associated with spinning up edge computing applications [13].

In edge computing — where data processing happens near where data is created rather than in centralized cloud data centers — open source plays a vital role. The distributed nature of edge computing works well with open source's collaborative development model. Open source frameworks provide standardized yet adaptable platforms that work across different hardware environments, from small microcontrollers to powerful edge servers. This flexibility is crucial in edge computing where available resources vary widely and proprietary solutions might impose unwanted limitations.

Open source in edge computing includes both software implementations and standards that operate at different levels. For example, EdgeX focuses on practical software implementation while Akraino defines architectural blueprints for edge infrastructure. This two-pronged approach addresses the needs of diverse edge environments that require both standardized interfaces and flexible implementations.

Some of the main advantages offered by open source projects specific to edge computing include:

  • Transparency, Security and Reliability: Community oversight helps find and fix security vulnerabilities faster than in closed-source alternatives. However, this transparency can also reveal potential weaknesses, highlighting why good architectural design matters more than "security through obscurity."
  • Cost and Time Efficiencies: Developers can focus on writing application code instead of building infrastructure components from scratch. This approach saves considerable time and money by allowing quick configuration of existing components and eliminating licensing fees.
  • Community Collaboration: The collaborative nature of open source creates a powerful problem-solving ecosystem where researchers, hobbyists, and companies all contribute to improving the code, often producing innovative solutions that might not emerge in more uniform development environments.
  • Interoperability: Edge environments typically contain a mix of different computing resources. Open source projects excel by supporting many device types and allowing developers to add support for new hardware, continuously expanding compatibility.
  • Longevity: Open source reduces the risk of being locked into a single vendor's solution. Even if projects are abandoned by their original developers, the code remains available for continued use and potential revival, providing confidence for long-term deployments in edge environments.
  • Accelerated Evolution: The open source model's emphasis on contribution helps speed up development in the rapidly changing field of edge computing, allowing practitioners to collectively address new challenges as they emerge.

However, there are also some drawbacks that must be considered as well:

  • Legal and Licensing Complexities: Organizations may struggle to maintain compliance with licensing requirements and risk accidentally publishing proprietary code when integrating with open source components.
  • Limited Driver Support: Many edge devices require proprietary drivers that open source projects cannot include due to licensing restrictions, potentially forcing companies toward proprietary solutions despite preferences otherwise.
  • Maintenance Variability: Support quality varies dramatically between projects, with some receiving few updates. Unlike commercial products, support isn't guaranteed unless paid services are arranged, and project priorities may not align with organizational needs.

Open source offers significant advantages for edge computing but isn't always the best choice. Decision-makers must carefully evaluate options against their specific use cases, weighing benefits and drawbacks. This requires understanding available options, as open source projects vary widely in design philosophy and quality. For anyone implementing edge computing applications, knowledge of major frameworks and standards is essential.

By carefully evaluating these factors, organizations can select frameworks that balance standardization and implementation support for their specific edge computing needs. There is no one-size-fits-all solution — organizations should thoroughly define their requirements before evaluating frameworks.

This section will focus on three main frameworks: Akraino, KubeEdge, and EdgeX Foundry, but there are many open source projects, including:

  • Eclipse ioFog: Offers lightweight edge computing with strong container orchestration for resource-constrained environments. Well-suited for distributed edge deployments where central management must coordinate numerous nodes across different locations [1].
  • Apache NiFi: Though not exclusively for edge computing, it provides powerful tools for routing, transforming, and processing data streams at the edge. Its visual interface enables rapid development of data pipelines by developers with varying expertise levels [2].
  • Microsoft Azure IoT Edge: A hybrid approach with open-source edge runtime components and proprietary cloud management [3].
  • OpenYurt: Extends Kubernetes to edge scenarios with focus on managing nodes with unstable network connections [4].
  • Eclipse Kura: Java-based framework for IoT gateways with access to low-level hardware interfaces [5].
  • Baetyl (OpenEdge): Offers separate frameworks for cloud and edge components with modular implementation [6].
  • StarlingX: Container-based infrastructure optimized for edge deployments, addressing unique requirements like fault management and high availability [7].

EdgeX Foundry[edit]

EdgeX Foundry is a comprehensive implementation framework maintained by the Linux Foundation [8]. Originally developed as Dell's "Project Fuse" in 2015 for IoT gateway computation, it became an open source project in 2017, establishing itself as an industrial-grade edge computing solution comparable to Cloud Foundry. It is considered a powerful industry tool for handling heterogeneous edge computing environments [12].

EdgeX uses a microservices architecture with four service layers and two system services, focusing on interoperability, flexibility, functionality, robustness, performance, and security. This design enables operation across diverse hardware while maintaining deployment adaptability.

The four service layers include:

  • Core Services: The foundation containing device information, data flows, and configuration. Provides data storage, command capabilities, device metadata, and registry services. Acts as the central nervous system connecting edge devices to IT systems.
  • Supporting Services: Handles analytics, scheduling, and data cleanup. Includes rule engines for edge-based actions, operation schedulers, and notification systems, enhancing local data processing.
  • Application Services: Manages data extraction, transformation, and transmission to external services like cloud providers. Enables event-driven processing for operations like encoding and compression.
  • Device Services: Interfaces with physical devices through their native protocols, converting device-specific data to standardized EdgeX formats. Supports MQTT, BACnet, Modbus, and other protocols.
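
A device service's south-side job, translating a protocol-specific payload into a standardized event, can be sketched as follows (the field names are loosely modeled on EdgeX's event/reading structure and should be treated as illustrative assumptions, not the exact EdgeX schema):

```python
import time

def modbus_to_event(device_name: str, register: int, raw_value: int,
                    scale: float, resource: str) -> dict:
    """Convert a raw Modbus register read into a normalized,
    EdgeX-style event. Field names are illustrative only."""
    return {
        "deviceName": device_name,
        "origin": int(time.time() * 1e9),     # nanosecond timestamp
        "readings": [{
            "resourceName": resource,
            "value": str(raw_value * scale),  # apply the device's scaling factor
            "valueType": "Float64",
        }],
    }

# A hypothetical boiler reports temperature as a scaled integer:
event = modbus_to_event("boiler-01", register=40001,
                        raw_value=43, scale=0.5, resource="Temperature")
print(event["readings"][0]["value"])  # 21.5
```

Once normalized like this, the reading can flow through Core and Application services without any consumer needing to know it originated as a Modbus register.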

Two system services complement these layers:

  • Security: Implements secure storage for sensitive information and includes a reverse proxy to restrict access to REST resources.
  • System Management: Provides centralized control for operations like starting/stopping services, monitoring health, and collecting metrics.

EdgeX has achieved high implementation maturity with regular releases and patches. Its modular design allows component customization, driving adoption across industries, particularly industrial IoT applications requiring device interoperability.

EdgeX architecture (https://nexus.edgexfoundry.org/content/sites/docs/staging/california/docs/_build/html/Ch-Intro.html)

EdgeX addresses resource constraints through flexible deployment models, running as Docker containers with various orchestration methods. The framework includes virtual devices for testing without physical hardware. Its loosely coupled services organized in layers can run on any hardware/OS, supporting both x86 and ARM processors, though this comprehensive architecture requires more resources than lighter frameworks and presents a steeper learning curve for new developers.

Efficiency is enhanced through intelligent command routing and support for data export to cloud environments via configurable exporters, balancing local processing with cloud integration. This approach enables edge-based decision making without cloud connectivity, though the microservices approach may introduce some delays compared to more tightly integrated solutions.

Developers primarily interact with Application ("north side") and Device ("south side") service layers, using specialized SDKs that handle interconnection details, allowing focus on application code. While this simplifies development, optimal deployment typically requires containerization expertise, and configuration may require significant optimization effort.

EdgeX Foundry works well in diverse scenarios including industrial IoT (factory equipment monitoring across protocols), building automation (unified management of systems), retail environments (processing point-of-sale data), energy management (usage monitoring and optimization), smart cities (local processing of urban infrastructure data), and medical device integration (standardizing healthcare device data).

The framework offers significant advantages through its vendor neutrality (community-maintained to avoid lock-in), interoperability (bridging protocols, hardware platforms, and cloud systems), modularity (independent component customization), and deployment flexibility (supporting containers, pods, or binaries). An active community ensures ongoing development and regular releases, while ready-to-use components support customization options. The system maintains operation during network outages with data buffering, though documentation may not cover all integration scenarios.

KubeEdge[edit]

KubeEdge is an open-source edge computing platform that extends Kubernetes containerized application orchestration to edge nodes [9]. Originally created by Huawei and donated to the Cloud Native Computing Foundation (CNCF) in 2019, it has gained significant traction among organizations already using Kubernetes. It works well for smart manufacturing, smart city applications, and edge machine learning inference.

KubeEdge's architecture consists of two primary components:

  1. Cloud Part: Includes KubeEdge CloudCore and the Kubernetes API server, managing node and pod metadata, making deployment decisions, and publishing them to edge nodes. It provides unified management through standard Kubernetes interfaces.
  2. Edge Part: Consists of EdgeCore running directly on edge devices, managing containerized applications, synchronizing with the cloud, and operating autonomously during disconnections. It handles computing workloads, network operations, device twin management, and message routing.
KubeEdge architecture (https://kubeedge.io/docs/)

KubeEdge maintains compatibility with Kubernetes primitives while introducing edge-specific optimizations. It features a lightweight footprint optimized for edge devices, using significantly fewer resources than standard Kubernetes nodes. The framework provides a comprehensive API and controller for managing IoT devices connected to edge nodes, making it valuable for deployments with many different devices. Security is implemented through certificate-based authentication between components and encryption for data in transit. KubeEdge processes data locally and transmits only relevant information to the cloud, improving response times for time-sensitive operations.
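
The process-locally, send-only-what-matters behavior described above can be illustrated with a simple threshold filter on edge readings (a sketch of the general pattern, not KubeEdge code; the threshold and sample values are invented):

```python
# Edge-side filtering sketch: routine readings are handled locally,
# and only anomalies are forwarded over the uplink to the cloud.

def filter_for_cloud(readings: list, threshold: float) -> list:
    """Keep only readings above `threshold` for upload; everything
    else stays at the edge, saving bandwidth and round trips."""
    return [r for r in readings if r > threshold]

# 1,000 routine temperature samples, two anomalies worth reporting:
samples = [21.0] * 998 + [88.5, 91.2]
to_cloud = filter_for_cloud(samples, threshold=80.0)
print(len(to_cloud), "of", len(samples))  # 2 of 1000
```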

Despite its advantages, KubeEdge comes with several limitations to consider. It requires knowledge of Kubernetes concepts, which presents a learning curve for teams without prior container orchestration experience. The framework also needs an existing Kubernetes control plane, which adds infrastructure requirements. While more lightweight than standard Kubernetes, KubeEdge demands more resources than some alternative edge frameworks, potentially making it unsuitable for highly constrained devices. Its comprehensive feature set may be overly complex for simple edge deployments where a more streamlined solution would suffice. Additionally, as a relatively young project compared to some alternatives, its ecosystem and tooling continue to evolve, which may impact long-term stability and support options.

Open Source Standards[edit]

These standards define system- and architectural-level details as well as more specific software/hardware implementation details. One such standard is Multi-Access Edge Computing (MEC), a network architecture that positions computing resources closer to end users, typically within or adjacent to cellular base stations and access points, primarily to reduce latency through localized processing. MEC is one of several popular edge computing paradigms, along with Fog, Cloudlet, and Mobile Cloud Computing (MCC), though MEC has gained popularity recently as a rising star with advancements toward 6G [14]; thus the focus of this section will be on MEC. The open source standards for MEC span the gamut of hardware and software integration and involve many different parties: telecom companies, maintainers of wireless communication standards, and software companies.

The European Telecommunications Standards Institute (ETSI) established MEC in 2014 to support emerging edge technology development [11]. Frameworks like KubeEdge and EdgeX Foundry operate within the MEC paradigm but focus on application deployment rather than modifying underlying network architecture -- they leverage existing infrastructure without fundamentally altering it.

Several open source initiatives support MEC implementations:

  • Open Network Automation Platform (ONAP): A Linux Foundation platform for orchestrating network functions, including edge computing resources, enabling telecom providers to automate service management.
  • O-RAN Alliance: Promotes openness in radio access networks, supporting edge computing through open interfaces and virtualized network elements.
  • OpenAirInterface (OAI): Provides standardized implementation of 4G/5G networks, enabling experimental deployments on commodity hardware.

The table shows some comparisons between different edge computing paradigms. Note that these paradigms themselves are not open source standards, but there are specific implementations of each that are open source. Among them, MEC and Fog have many open source implementations. For example, fog computing has Eclipse ioFog and MEC has Akraino Edge Stack implementations.


Table I: High Level Comparison of Edge Computing Paradigms, Porambage et al. [11]

  • Initial promotion: ETSI (2014) for MEC; Cisco (2011) for Fog; Carnegie Mellon Uni. (2013) for Cloudlet; Aepona (2010) for MCC.
  • Objective: all four paradigms aim to bring cloud computing capabilities closer to User Equipment (UE).
  • Infrastructure owners: telecom operators for MEC; private entities or individuals for the others.
  • Node location: the radio network controller or a macro base station for MEC; any strategic location between the end user device and the cloud for the others.
  • SW architecture: mobile orchestrator based (MEC); fog abstraction layer based (Fog); cloudlet agent based (Cloudlet); service oriented (MCC).
  • Service accessibility: direct access from the closest UE for MEC, Fog, and Cloudlet; via an Internet connection for MCC.
  • Latency and jitter: low for MEC, Fog, and Cloudlet; high for MCC.
  • Context awareness: high for MEC; medium for Fog; low for Cloudlet; high for MCC.
  • Storage capacity and computation power: limited for MEC, Fog, and Cloudlet; high for MCC.
  • Relevance to IoT: high for MEC, Fog, and Cloudlet; low for MCC.


The advancement of open source MEC solutions promises to accelerate innovation by broadening developer contributions to telecommunications edge computing, potentially reducing costs while increasing flexibility and adoption.

Akraino[edit]

Akraino Edge Stack takes a different approach to open source edge computing by focusing on architectural blueprints rather than specific software implementations [10]. Launched in 2018 under the Linux Foundation with AT&T's initial contribution, it quickly attracted industry partners including Nokia, Intel, Arm, and Ericsson.

The core concept in Akraino is the "blueprint" -- a validated configuration of hardware and software components designed for specific edge computing use cases. Blueprints include hardware specifications, software components, and deployment instructions tailored to particular scenarios. This approach represents a middle ground between rigid standardization and custom implementations, providing guidance without mandating vendor-specific solutions.

Akraino's development process progresses from proposal through validation. New blueprints begin as proposals outlining use cases and requirements, advance to development where components are integrated, and undergo extensive testing before release to verify functionality, performance, and security. This rigorous validation ensures blueprints represent production-ready architectural patterns.

While designed to support any access methodology, Akraino is primarily used in telecommunications (4G LTE and 5G) with placement of edge devices based on specific requirements.

Akraino attempts to minimize the total cost of operation (TCO) by optimizing the placement of edge devices within the network, while giving strong consideration to performance requirements.

Akraino offers distinct advantages through its architectural focus: validated configurations reduce implementation risk, and rigorous testing ensures production readiness. However, limitations exist compared to other frameworks: blueprints require significant work, and substantial technical expertise, to become functional systems.


3.3 Serverless at the Edge[edit]

Edge computing processes data closer to where it is created, which reduces delays, lowers costs, and improves privacy. Serverless computing is also becoming more popular, as it lets developers run small pieces of code without managing servers, reducing complexity. Combining these two concepts is powerful, especially for smart devices, real-time applications, and AI.

Before discussing these specific examples, let's first cover the basic concept of serverless edge computing and contrast it with cloud computing (although in reality, the two paradigms often use serverless platforms that span the edge-cloud continuum). In traditional cloud computing, the major problem being solved is scalability. Customers paid for resources allocated, not for resources consumed. Eventually, application developers could source all the compute they could possibly want, writing their applications to auto-scale on demand against seemingly infinite compute. But doing so is often extraordinarily inefficient. Additionally, the typical monolithic software stacks seen in cloud applications do not port well to edge applications, where devices are far more resource constrained.

In a nutshell, serverless computing allows developers to deploy single-purpose applications that scale with the needs of the application. These are self-contained instances that typically run code snippets, in contrast to the dedicated, containerized platforms traditionally used in cloud applications. Serverless instances spin up on demand and shut down when they are no longer being utilized [22]. Serverless computing enables the multi-tenancy capabilities commonly seen in edge applications while minimizing resource utilization.
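
The spin-up-on-demand, scale-to-zero lifecycle described above can be sketched as a toy function runtime (an illustration of the pattern only, not any vendor's platform):

```python
class ServerlessRuntime:
    """Toy FaaS runtime sketch: instances exist only while handling
    load, and scale back to zero when idle."""

    def __init__(self, handler):
        self.handler = handler
        self.warm_instances = 0
        self.cold_starts = 0

    def invoke(self, event):
        if self.warm_instances == 0:
            self.cold_starts += 1     # first request pays a cold-start penalty
            self.warm_instances = 1   # spin up on demand
        return self.handler(event)

    def idle_timeout(self):
        self.warm_instances = 0       # scale to zero: no resources held

rt = ServerlessRuntime(lambda e: e * 2)
print(rt.invoke(5), rt.cold_starts)   # 10 1  (first call is a cold start)
rt.invoke(6)                          # warm instance reused: no cold start
rt.idle_timeout()                     # nothing allocated while idle
rt.invoke(7)
print(rt.cold_starts)                 # 2
```

The same lifecycle is also where the cold-start latency drawback discussed later comes from: every return from zero costs a fresh spin-up.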

Serverless in the edge-cloud continuum. (https://dl.acm.org/doi/fullHtml/10.1145/3437378.3444367)

Serverless technology emerged to fill a gap in developer needs: a pay-for-what-you-use model combining many other well-developed technologies such as microservices, Function-as-a-Service (FaaS), containerization, and event-driven programming [20]. As it happens, the conveniences afforded by these technologies are also favorable for edge computing applications. With serverless, edge computing applications benefit from zero-scaled environments (no allocated resources) when no compute is necessary -- a massive boon where conserving device energy is concerned. Additionally, the function-oriented paradigm that serverless leans into makes it easier to adapt a multitude of hardware and software environments to a specific edge application, since edge devices can perform some computation and then offload additional handling to serverless services such as AWS Lambda. Such services are available on demand and reduce deployment times to seconds. As alluded to above, serverless abstracts away many of the complicated technical challenges associated with edge computing. It plays a key role in the edge-cloud continuum for this reason: serverless services running in the cloud can be hit directly by edge applications, bridging many of the capabilities of both realms.

Serverless is a powerful tool with many benefits for edge computing applications. But, as with any solution, there are drawbacks and open challenges to overcome. For example, the ability to cold-start resources means the always-on connection is lost, and latency can suffer as a result: applications must wait for instances to come online before messages can be handled, introducing delay that may spell disaster for time-critical tasks. More generally, the massive cost savings come at the expense of performance that mission-critical IoT use cases, such as healthcare or automotive apps, may find unacceptable. Additionally, event-driven applications that prefer an "always-on" connection may find serverless to be the wrong choice. Typically, serverless shines where IoT devices need to hit serverless services sporadically or infrequently, or in bursts followed by long periods of low activity; under steady, continuous load, the main benefit of scaling to zero is lost. The fact that serverless is stateless also means applications may lose context from previous communications, as well as server affinity, further increasing end-to-end latency. Security is also a concern, particularly where functions are deployed in environments less secure than their monolithic cloud counterparts. Finally, many serverless platforms do not support advanced processing hardware such as GPUs, FPGAs, and ASICs.

Serverless in the Real World[edit]

This section will focus on the ways researchers and companies are using serverless technology at the edge. Some focus on helping developers work faster, others on improving performance, and some on making smarter devices. We’ll look at a few main types: a research platform, content delivery networks, and Internet of Things (IoT) platforms.

IBM Research – Deviceless Serverless Platform for Edge AI

This research prototype proposes a serverless edge AI platform that enables AI workflows across cloud and edge resources. It introduces a deviceless approach where edge nodes are abstracted as cluster resources, removing the need for developers to manage specific devices manually. The serverless programming model includes datasets, ML models, runtime triggers, and latency deadlines.

One example of this model in use is a personal health assistant that collects sensitive biosensor data: a base model is first trained in the cloud, then refined on the device using transfer learning, which preserves privacy and keeps responses prompt. Another example is field technicians equipped with AI-enhanced mobile tools that operate offline and sync with the cloud as needed.

CDN – Content Delivery Platforms. Content delivery network (CDN) providers have introduced various serverless edge models, enabling dynamic content and reduced latency for industries such as media, e-commerce, and data analytics. Cloudflare lets you run small pieces of code on its servers around the world; these are well suited to simple tasks like modifying web pages or handling requests quickly. Amazon lets you run code at its edge locations using Lambda@Edge, which works with CloudFront and supports multiple programming languages, making it useful for custom features that need to run close to users. IBM lets you run code close to users with a tool called Edge Functions, which executes lightweight programs to speed up how websites and apps respond.

Source: Platforms for Serverless at the Edge: A Review (https://link.springer.com/chapter/10.1007/978-3-030-71906-7_3)
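To make the Lambda@Edge pattern above concrete, here is a minimal viewer-request handler sketch in Python. The event layout follows the CloudFront Lambda@Edge event format, but the header name, URIs, and rewrite logic are illustrative assumptions, not production code:

```python
def lambda_handler(event, context):
    """Viewer-request handler: rewrite mobile traffic to a lighter page.

    Sketch only; field names follow the CloudFront Lambda@Edge event
    shape, but the routing rule itself is hypothetical.
    """
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    # CloudFront can inject a device-type header at the edge
    is_mobile = (
        headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value") == "true"
    )
    if is_mobile and request["uri"] == "/index.html":
        request["uri"] = "/mobile/index.html"  # serve the lighter variant
    return request


# Local invocation with a fabricated event for testing
event = {"Records": [{"cf": {"request": {
    "uri": "/index.html",
    "headers": {"cloudfront-is-mobile-viewer": [{"value": "true"}]},
}}}]}
print(lambda_handler(event, None)["uri"])
```

Because the function runs at the edge location nearest the viewer, the rewrite happens before the request ever reaches the origin server.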

IoT Platforms. IoT platforms are built for devices like sensors, cameras, and other smart tools; they help manage devices, run code locally, and handle real-time data. AWS IoT Greengrass allows developers to run the same functions they use in the cloud on local devices; it works even without network access and supports many types of hardware. Azure IoT Edge is a major contender in serverless platforms: Microsoft's platform is open source, supports many operating systems and devices, and lets developers run containerized functions on their own equipment while connecting to Microsoft's cloud tools. FogFlow decides where to run serverless functions based on data, location, and system rules, helping smart services move to and run on the best device for the specific situation.
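A core pattern these platforms enable is local processing: the device filters its own sensor stream and forwards only interesting readings to the cloud. The sketch below simulates that pattern in plain Python (the "publish to the cloud" step is just a list; a real Greengrass or IoT Edge deployment would use the platform's messaging SDK):

```python
import json
import statistics


def filter_readings(readings, threshold=2.0):
    """Run at the edge: forward only anomalous sensor readings.

    Illustrative sketch of the local-processing pattern; the outbox
    stands in for an MQTT publish to the cloud.
    """
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1.0
    outbox = []
    for i, value in enumerate(readings):
        if abs(value - mean) / stdev > threshold:  # z-score test
            outbox.append(json.dumps({"index": i, "value": value}))
    return outbox


# Five normal temperature readings and one spike
readings = [20.1, 20.3, 19.9, 20.2, 35.0, 20.0]
print(filter_readings(readings))
```

Only the spike crosses the threshold, so a single message leaves the device instead of six, which is exactly the bandwidth saving that motivates running code at the edge.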

Function as a Service (FaaS). FaaS is another serverless computing model, in which developers write functions that run in response to events without managing the underlying servers. These functions are lightweight, start quickly, and run only when triggered, which makes them ideal for edge environments where computing power and energy are limited resources. Instead of keeping full applications running all the time, FaaS lets devices perform tasks only when needed, saving resources and improving response times.

Zhang et al. (2021) explored how this model can be used for real-time AI inference at the edge [19]. In the study, a FaaS platform was shown to run AI tasks, such as object detection or speech recognition, using containerized functions that launch instantly when triggered by incoming data. For example, in a personal health assistant scenario, an AI model is first trained in the cloud; then, as the user interacts with the device, it collects biosensor data and refines the model locally through transfer learning. This setup allows sensitive health data to be processed directly on the device, improving privacy and reducing delays, while the model continues to learn and adapt over time. The approach combines local and cloud strengths to maximize privacy and efficiency.
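The cloud-train-then-refine-locally pattern can be shown with a deliberately tiny example: a one-parameter linear model whose weight arrives from the cloud and is then fine-tuned on-device with a few gradient steps. This is a toy sketch of the idea, not the method from [19]; all numbers and names are illustrative:

```python
def local_finetune(w_cloud, local_xy, lr=0.05, epochs=200):
    """Refine a cloud-trained 1-D linear model y = w*x on-device.

    The private (x, y) pairs never leave the device; only the updated
    weight would be shared back, if anything.
    """
    w = w_cloud
    for _ in range(epochs):
        for x, y in local_xy:
            grad = 2 * (w * x - y) * x  # dL/dw for squared error
            w -= lr * grad
    return w


w_cloud = 1.0  # weight shipped from the cloud-trained base model
local_data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # private readings, y ≈ 2x
w_edge = local_finetune(w_cloud, local_data)
print(round(w_edge, 2))  # converges near 2.0
```

Real transfer learning updates only the last layers of a large network rather than a single scalar, but the privacy property is the same: the raw biosensor data stays on the device.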

In conclusion, serverless edge computing is still new, but it is growing fast. Most commercial platforms are tied to their cloud providers, which can make switching hard, while open-source tools offer more freedom but need more setup. Tooling for managing many edge devices and complex rules is also still maturing. Even with these challenges, serverless edge platforms give developers more ways to build fast, smart, and flexible applications, and as the tools improve it will become easier to create powerful systems that run wherever they are needed.

References[edit]

  1. “Core Concepts: Getting Started: Eclipse Iofog.” Getting Started | Eclipse ioFog, iofog.org/docs/2/getting-started/core-concepts.html. Accessed 5 Apr. 2025.
  2. Apache NiFi Team. Apache Nifi Overview, nifi.apache.org/nifi-docs/overview.html. Accessed 5 Apr. 2025.
  3. Dominicbetts. “Introduction to the Azure Internet of Things (IOT) - Azure Iot.” Introduction to the Azure Internet of Things (IoT) - Azure IoT | Microsoft Learn, learn.microsoft.com/en-us/azure/iot/iot-introduction. Accessed 5 Apr. 2025.
  4. “Introduction: OpenYurt.” OpenYurt RSS, 5 Apr. 2025, openyurt.io/docs/.
  5. “Welcome to the Eclipse KuraTM Documentation.” Eclipse KuraTM Documentation, eclipse-kura.github.io/kura/docs-release-5.6/. Accessed 5 Apr. 2025.
  6. Edge, LF, and By. “What Is Baetyl?” LF EDGE: Building an Open Source Framework for the Edge., Linux Foundation, 4 June 2020, lfedge.org/what-is-baetyl/.
  7. “Welcome to the STARLINGX Documentation.” Welcome to the StarlingX Documentation - StarlingX Documentation, docs.starlingx.io/. Accessed 5 Apr. 2025.
  8. Johanson, Michael. “EdgeX Foundry Overview.” Overview - EdgeX Foundry Documentation, docs.edgexfoundry.org/4.0/. Accessed 5 Apr. 2025.
  9. “Why Kubeedge: Kubeedge.” KubeEdge RSS, kubeedge.io/docs/. Accessed 5 Apr. 2025.
  10. “Akraino.” LF EDGE: Building an Open Source Framework for the Edge., Linux Foundation, lfedge.org/projects/akraino/. Accessed 5 Apr. 2025.
  11. P. Porambage, J. Okwuibe, M. Liyanage, M. Ylianttila and T. Taleb, "Survey on Multi-Access Edge Computing for Internet of Things Realization," in IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2961-2991, Fourthquarter 2018, doi: 10.1109/COMST.2018.2849509.
  12. J. John, A. Ghosal, T. Margaria and D. Pesch, "DSLs for Model Driven Development of Secure Interoperable Automation Systems with EdgeX Foundry," 2021 Forum on specification & Design Languages (FDL), Antibes, France, 2021, pp. 1-8, doi: 10.1109/FDL53530.2021.9568378.
  13. V. Villali, S. Bijivemula, S. L. Narayanan, T. Mohana Venkata Prathusha, M. S. Krishna Sri and A. Khan, "Open-source Solutions for Edge Computing," 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 2021, pp. 1185-1193, doi: 10.1109/ICOSEC51865.2021.9591859.
  14. Y. Wang and J. Zhao, "Mobile Edge Computing, Metaverse, 6G Wireless Communications, Artificial Intelligence, and Blockchain: Survey and Their Convergence," 2022 IEEE 8th World Forum on Internet of Things (WF-IoT), Yokohama, Japan, 2022, pp. 1-8, doi: 10.1109/WF-IoT54382.2022.10152245.
  15. Amazon Web Services. "AWS At the Edge." *Amazon Web Services*. aws.amazon.com.
  16. "The Internet of Things: Microsoft’s view for the future." *Compact*, www.compact.nl/articles/the-internet-of-things-microsofts-view-for-the-future. Accessed 22 Apr. 2025.
  17. "Navigating the Hybrid Cloud Horizon: Google Distributed Cloud Edge." *The Futurum Group*, futurumgroup.com/insights/navigating-the-hybrid-cloud-horizon-google-distributed-cloud-edge/. Accessed 22 Apr. 2025.
  18. "NVIDIA is Poised to Dominate the Edge with its GPU Strengths and Partnership Strategy." *ITC Blogs / Current Analysis*, 31 Oct. 2019, itcblogs.currentanalysis.com/2019/10/31/nvidia-is-poised-to-dominate-the-edge-with-its-gpu-strengths-and-partnership-strategy/. Accessed 22 Apr. 2025.
  19. Zhang, C., Lin, W., Zhang, Y., Pan, Y., & Tian, Y. (2021). “FaaS Inference: Enabling Serverless Deep Learning Inference at the Edge.” Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). https://doi.org/10.24963/ijcai.2021/320.
  20. M. S. Aslanpour et al., "Serverless Edge Computing: Vision and Challenges," in Proceedings of the Australasian Computer Science Week Multiconference (ACSW '21), Dunedin, New Zealand, 2021, pp. 1-10, doi: 10.1145/3437378.3444367.
  21. A. Hall and U. Ramachandran, "An execution model for serverless functions at the edge," in Proceedings of the International Conference on Internet of Things Design and Implementation (IoTDI '19), Montreal, QC, Canada, 2019, pp. 225-236, doi: 10.1145/3302505.3310084.