== 7.2 Edge for AR/VR ==

=== Introduction ===
As Edge Computing expands into new opportunities and uses across society, it is also reaching Augmented Reality (AR) and Virtual Reality (VR) devices. From simple visualization needs to advancements in surgery, society has only begun to grasp the possibilities that developing this technology may hold.

=== AR, What Is It? ===
Augmented Reality (AR) is a technology that enhances a user's real-world experience with digital features. AR builds this experience on techniques such as object recognition, object tracking, and content rendering. For object recognition, objects must first be identified and analyzed before AR overlays a digital image on them. For AR applications that use object tracking, real-world objects are identified and tracked continuously, with digital content anchored to them. In either case, virtual content must be rendered onto the real world via content rendering.

=== Types of AR ===
There is no one-size-fits-all AR application. AR applications differ in their approach, requirements, and usability across industries. From marker-dependent systems to applications that simply project content onto a surface, AR comes in a wide range of forms to fit differing needs.

[[File:Types_of_AR_Apps.jpg]]

Different Types of AR Applications [1].

'''Marker Based AR:'''
* Marker Based AR uses pre-identified objects and images as markers that trigger digital elements (see the sketch after this list). Depending on the type of marker, the AR application places interactive content at that location.

'''Markerless AR:'''
* Unlike Marker Based AR, Markerless AR does not rely on markers to place its content. Using sensor data such as GPS or camera input, the user's position is determined and virtual content is placed accordingly.

'''Location Based AR:'''
* Location Based AR is a form of Markerless AR that uses the user's GPS position to place digital content.

'''Projection Based AR:'''
* Projection Based AR needs no location or marker; instead, it projects content onto a surface for interaction.

'''Outlining Based AR:'''
* Outlining Based AR uses edges and boundaries detected through image recognition to identify objects, then layers digital content on top of them.
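To make the marker-based workflow above concrete, here is a minimal sketch of a detection loop, assuming Python with the OpenCV contrib <code>aruco</code> module (OpenCV 4.7 or later; older releases expose <code>cv2.aruco.detectMarkers()</code> instead of the <code>ArucoDetector</code> class). The marker-ID-to-content mapping is purely hypothetical, and a real AR application would render the content anchored to the estimated marker pose rather than simply labelling the frame.

<syntaxhighlight lang="python">
# Minimal marker-based AR trigger loop (sketch; assumes opencv-contrib-python >= 4.7).
import cv2

# Hypothetical mapping from marker IDs to the digital content they trigger.
CONTENT_BY_MARKER = {0: "exhibit_video.mp4", 1: "repair_instructions.glb"}

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                       # live camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            content = CONTENT_BY_MARKER.get(int(marker_id))
            if content is None:
                continue
            # A real app would render `content` anchored to the marker pose;
            # here we just outline the detected marker and label it.
            pts = marker_corners.reshape(-1, 2).astype("int32")
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
            cv2.putText(frame, content, (int(pts[0][0]), int(pts[0][1]) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("Marker Based AR (sketch)", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>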
As these types of AR applications differ in implementation, effect, and inner workings, they can be applied to different sectors of society and commerce, and have been developed to fit distinct research needs in scientific development. These applications are summarized in the table below.

{| class="wikitable" style="width:100%; text-align:left;"
|+ Types of AR Applications [5]
|-
! Type !! Sector !! Applications !! Scientific Development
|-
| Marker Based || Museums and Exhibits, Retail and Marketing, Maintenance and Repair || Interactive Exhibitions, Augmented Reality Retail Store Catalog, Complete Repair Tasks, Aircraft Maintenance || The Museum of London's Street Museum App, Virtual Try-On Technology in Retail
|-
| Markerless || AR Navigation Apps, AR Games, AR Furniture Apps, AR Social Media Filters, AR Maintenance and Repair || Pokémon Go, Instagram Filters, Object Recognition to Identify Specific Elements || An Outdoor Augmented Reality Mobile Application Using Markerless Tracking
|-
| Projection Based || Paleontology, Industrial Training, AR Furniture Apps, Maintenance, Entertainment and Gaming || Projection of Computer-Generated Information onto Fossil Specimens || The Virtual Mirror
|-
| Outlining Based || Manufacturing, Education, Architecture, Medical Procedures || Machine Maintenance, Surgeries, Assembling Parts || A Conceptual Framework for Integrating Building Information Modelling & Augmented Reality (3D Architectural Visualizations)
|}

=== AR Frameworks and Platforms ===
{| class="wikitable" style="width:100%; text-align:left;"
|+ Features of Various AR Platforms and Frameworks
|-
! Platform !! Positive Features !! Negative Features
|-
| ARBlocks || Good Visualization and Tracking || Relatively Slow
|-
| ARCore || Good Understanding of the Environment, Good for Gaming || AR Tracking is not Consistent, Not Stable on Older Generation Devices
|-
| ARKit || Good AR Rendering Devices, Supports Old Devices, Free for Xcode Features, Instant AR via LiDAR || Charges for App Distribution, Only for iOS and iPadOS
|-
| AR-media || Offers Cloud-Based Services for AR, Easy AR Project Development Without Coding Skills, Free Registration || Depth Mapping is not Available Yet
|-
| ARToolKit || Processing Happens in Real Time, Fast AR Placing is Possible, Free and Open Source || Can be Used Only with Image Markers
|-
| ARWin || Interaction in the Virtual Environment, Can be Categorized Based on Geometry || Careful Calibration of Markers is Required, Camera Resolution Constrains More Accurate Tracking
|-
| Bright || Comfortable AR HMDs, Text to Speech, Local Processing || Slow Processing as it is Done Locally, Improper Control Over Functionality
|-
| CoVAR || Eye-Gaze, Head-Gaze, and Hand-Gesture Based Interaction || Simulator Sickness, Only Head Tracking is Visible
|-
| DUIRA || Creates a Realistic Virtual Digital Environment that Reflects Real-World Experiences || Requires Multiple AR Markers, Angle Changes in AR Markers Create Different Effects
|-
| KITE || Quickly Assembled from Existing Hardware, Simple Initialization || Mesh Reconstruction Algorithm is not Accurate or Stable, Only Available on a Desktop System
|-
| Nexus || Spatially Aware Applications, Easy and Common Infrastructure || Needs to Store and Manage Location Information Globally, Harder Location-Aware Communication Concepts
|-
| Vuforia || Most Widely Used, Flicker-Free Tracking || Paid License Required for Advanced AR Features
|-
| WARP || Well-Defined Layered Architecture || Relatively Slower Processing
|-
| Wikitude || Geo-Based Tracking Available || License Needed for App Development
|}

=== VR, What Is It? ===
Virtual Reality (VR) is a technology that fully immerses users in a virtual world through a three-dimensional artificial environment.

=== Features of VR ===
VR can be broken down into two main features: '''Immersion''' and '''Interaction'''.
'''Immersion:'''
* Immersion is the degree to which the user is absorbed in the virtual environment, which depends on the capability of the system and the level of immersion the application is designed for. It ranges from total immersion, which completely engulfs the user in a virtual environment, to non-immersive, where the user remains aware of their surroundings and accesses the application through a mobile or desktop device.

'''Interaction:'''
* Interaction is the degree to which a user can alter the virtual world. Full interaction is made possible through VR systems such as head-mounted goggles, special gloves, and even wired clothing; with these, users can fully interact with the virtual world, viewing it from different angles and even reshaping the environment. At the other end of the spectrum, little to no interaction is available when users watch a 3D movie: only one view is possible, and the story cannot be altered by the viewer [2].

=== Types of VR ===
Virtual Reality (VR) systems can be broken down into three main types. As depicted in Figure 3 below, these are Non-Immersive, Semi-Immersive, and Immersive.

[[File:Types_of_VR_Systems.jpg]]

Figure 3: Types of VR Systems [2].

'''Non-Immersive VR'''
* Non-Immersive VR is the least expensive of the VR system types and offers the least immersion. Running on a desktop or mobile device, non-immersive VR is screen based and typically built on Computer Aided Design (CAD) systems.

'''Semi-Immersive VR'''
* In Semi-Immersive VR, the user remains aware of and can still see themselves. An example of such a system is the Cave Automatic Virtual Environment (CAVE), where a combination of projections, speakers, and goggles supports the immersive experience.

'''Fully Immersive VR'''
* The most immersive type, Fully Immersive VR, allows the user to be completely engulfed in the virtual experience. This is accomplished with Head-Mounted Displays (HMDs) and data gloves.

{| class="wikitable" style="width:100%; text-align:left;"
|+ Types of HMD Devices
|-
! Type !! About !! Devices
|-
| PC VR || Tethered to a PC || Oculus Rift, HTC Vive
|-
| Console VR || Tethered to a Game Console || PlayStation VR
|-
| Mobile VR || Untethered from PC/Console, with a Smartphone Inside || Samsung Gear VR, Google Daydream
|-
| Wireless || All-in-One HMD Device || Unreleased Intel Alloy
|}

=== Challenges Faced by AR/VR ===
Today's AR/VR applications suffer from a number of challenges, such as:
* Processing Speed
* High Bandwidth Requirements
* Low Latency Requirements
* Short Battery Life/High Battery Consumption
* Limited Computational Capability
* Limited Ability to Track Eye Movements and Facial Expressions
* Rendering Issues

AR/VR devices need high processing speed to render 3D graphics, track user movements, and process sensor data without delay. Devices should sustain a frame rate of at least ninety frames per second, along with a correspondingly high display refresh rate, to keep the experience smooth and believable; without these characteristics, the visual experience becomes sluggish and of lower quality, resulting in motion sickness for users. Processing speed therefore directly affects the responsiveness and quality of the AR/VR experience: the higher the processing speed, the smoother, more accurate, and faster the rendering, and the more enjoyable the experience.

AR/VR devices also require high bandwidth because of the volume of data they transmit. High-resolution images and video must be delivered to both eyes, and motion tracking and interaction data must flow continuously, all of which demands high bandwidth and low latency. Minimal delay keeps the experience smooth and comfortable; even small delays in visual or audio feedback can cause motion sickness or disrupt the sense of immersion.

Because many devices are portable and small, AR/VR hardware suffers from limited computational capacity. The size-versus-compute trade-off yields a device that is not heavy for the user but often cannot sustain the necessary processing power, and the processing demands typically result in high battery consumption, shortening the device's battery life. Finally, the need for high computational power, advanced graphics capabilities, and accurate real-time depth sensing and perception means many applications currently face rendering issues. Errors and delays in the rendering process adversely impact the user's experience, as they lead to insufficient and inaccurate data [5]. The rough numbers behind the frame-rate and bandwidth requirements are worked through in the sketch below.
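To give a sense of scale for these requirements, the short calculation below works out the per-frame time budget implied by the ninety frames per second target and a rough bandwidth figure for an uncompressed stereo stream. The per-eye resolution, color depth, and compression ratio are illustrative assumptions rather than figures taken from the cited surveys.

<syntaxhighlight lang="python">
# Back-of-the-envelope budgets for the challenges listed above.
# The display parameters below are illustrative assumptions.

TARGET_FPS = 90                      # minimum frame rate cited for a smooth experience
frame_budget_ms = 1000 / TARGET_FPS
print(f"Per-frame budget at {TARGET_FPS} fps: {frame_budget_ms:.1f} ms")   # ~11.1 ms

# Rough uncompressed bandwidth for a stereo (two-eye) stream.
width, height = 1920, 1080           # assumed per-eye resolution
bits_per_pixel = 24                  # assumed RGB color depth
eyes = 2
raw_bps = width * height * bits_per_pixel * eyes * TARGET_FPS
print(f"Uncompressed stereo stream: {raw_bps / 1e9:.1f} Gbit/s")           # ~9.0 Gbit/s

# Even with an optimistic 100:1 video compression ratio (assumed), the link
# still needs on the order of 100 Mbit/s plus headroom for tracking data.
compression_ratio = 100
print(f"With ~{compression_ratio}:1 compression: "
      f"{raw_bps / compression_ratio / 1e6:.0f} Mbit/s")
</syntaxhighlight>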
=== Edge Computing to the Rescue ===
Edge Computing brings computational capability closer to the consumer of the data and provides a number of benefits that directly address the issues plaguing AR/VR devices. Some of these benefits include:
* Lessened Network Load
* Faster Data Transfer
* Decreased Latency
* Better Bandwidth
* Lowered Energy Consumption
* Higher Processing Capability
* Faster Responsiveness

One of the biggest benefits of Edge Computing is that it brings computational power closer to the user, so the burden of computationally intensive AR/VR applications can be outsourced to the edge. In addition, Edge Computing can offload workloads to distributed clusters, reducing network bottlenecks [7]. Furthermore, Edge Computing provides faster data transfer, better bandwidth, and significantly lower latency.

Factors that determine rendering in a user's virtual space include head rotation, body movement, control commands from the user, and changes in the virtual space itself. By employing Edge Computing, as depicted below, virtual space changes can be registered by the edge and cloud servers [3]. This would vastly improve speed and responsiveness and help reduce the motion sickness some users experience, as illustrated by the offloading sketch at the end of this section.

[[File:VR_Factors_that_Affect_Rendering.jpg]]

Factors Determining When and What Rendering Needs to be Performed in VR [3].
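As an illustration of how an edge-assisted renderer might decide where work happens, the sketch below compares an estimated on-device render time against the cost of offloading a frame to an edge server (network round trip plus upload, remote rendering, and download). The <code>FrameTask</code> fields, the <code>should_offload</code> helper, and all timing figures are hypothetical; the cited papers describe considerably more sophisticated edge/cloud rendering pipelines [3][7].

<syntaxhighlight lang="python">
from dataclasses import dataclass

# Illustrative, hypothetical numbers: a real system would measure these at runtime.

@dataclass
class FrameTask:
    local_render_ms: float   # estimated render time on the headset/phone GPU
    edge_render_ms: float    # estimated render time on the edge server
    payload_kb: float        # pose + scene-change data sent to the edge
    result_kb: float         # encoded frame (or delta) returned to the device


def should_offload(task: FrameTask, rtt_ms: float, uplink_mbps: float,
                   downlink_mbps: float, frame_budget_ms: float = 11.1) -> bool:
    """Offload when the edge path beats local rendering and fits the frame budget."""
    uplink_ms = task.payload_kb * 8 / (uplink_mbps * 1000) * 1000
    downlink_ms = task.result_kb * 8 / (downlink_mbps * 1000) * 1000
    edge_total_ms = rtt_ms + uplink_ms + task.edge_render_ms + downlink_ms
    return edge_total_ms < min(task.local_render_ms, frame_budget_ms)


# Example: a heavy frame on a thin mobile headset with a nearby edge server.
frame = FrameTask(local_render_ms=25.0, edge_render_ms=4.0,
                  payload_kb=2.0, result_kb=150.0)
print(should_offload(frame, rtt_ms=3.0, uplink_mbps=50.0, downlink_mbps=400.0))  # True
</syntaxhighlight>

In practice such a decision would be re-evaluated continuously as network conditions and scene complexity change.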
As current AR/VR programs cannot support interaction among numerous mobile users because of wireless bottlenecks, we can only imagine what deploying Edge Computing will mean for the future of these devices. Once Edge Computing is integrated into these systems, we could feel as though we are actually walking on the Moon, exploring the depths of the ocean, or wherever else the imagination leads.

=== References ===
[1] J. S. Devagiri, S. Paheding, Q. Niyaz, X. Yang and S. Smith, "Augmented Reality and Artificial Intelligence in industry: Trends, tools, and future challenges," Expert Systems with Applications, vol. 207, November 2022, doi: 10.1016/j.eswa.2022.118002.

[2] R. Al-musawi and F. Farid, "Computer-Based Technologies in Dentistry: Types and Applications," Journal of Dentistry (Tehran, Iran), vol. 13, pp. 215-222, June 2016.

[3] X. Hou, Y. Lu and S. Dey, "Wireless VR/AR with Edge/Cloud Computing," 2017 26th International Conference on Computer Communication and Networks (ICCCN), Vancouver, BC, Canada, 2017, pp. 1-8, doi: 10.1109/ICCCN.2017.8038375.

[4] M. Eswaran and M. V. Raju Bahubalendruni, "Challenges and opportunities on AR/VR technologies for manufacturing systems in the context of industry 4.0: A state of the art review," Journal of Manufacturing Systems, vol. 65, pp. 260-278, October 2022, doi: 10.1016/j.jmsy.2022.09.016.

[5] C. E. Mendoza-Ramírez, J. C. Tudon-Martinez, L. C. Félix-Herrán, J. J. Lozoya-Santos and A. Vargas-Martínez, "Augmented Reality: Survey," Applied Sciences, vol. 13, no. 18, 10491, September 2023, doi: 10.3390/app131810491.

[6] S. A. Jebamani and S. G. Winster, "A Study of Mobile Edge Computing in AR/VR Applications," 2022 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), Chennai, India, 2022, pp. 1-10, doi: 10.1109/ICPECTS56089.2022.10047234.

[7] S. Sukhmani, M. Sadeghi, M. Erol-Kantarci and A. El Saddik, "Edge Caching and Computing in 5G for Mobile AR/VR and Tactile Internet," IEEE MultiMedia, vol. 26, no. 1, pp. 21-30, Jan.-March 2019, doi: 10.1109/MMUL.2018.2879591.