In the last few years, the uptake of internet-connected devices has grown exponentially, and that growth shows no sign of slowing. According to Gartner, by 2023 the average CIO will be responsible for more than three times the endpoints they managed in 2018. Supporting such a surge, however, would require scaling up cloud infrastructure and provisioning substantial network capacity, which may not be economically feasible.
In such cases, edge computing can emerge as a solution, since the required resources (computing, storage, and networking) can be provided closer to the data source for processing.
Businesses are looking for insights that are near-real-time and actionable, which is fueling edge computing’s uptake across industries. Edge computing’s benefits are well known, and in a previous article, I illustrated the benefits and some use cases.
It is only a matter of time before edge becomes mainstream, as demonstrated by a recent IDC survey that found 73% of respondents chose edge computing as a strategic investment. The open-source community, cloud providers, and telecom service providers are all working towards strengthening the edge computing ecosystem, accelerating its adoption and the pace of innovation.
With such tailwinds, web app developers should put an edge adoption plan in place to become more agile and to leverage edge's ability to improve user engagement.
Benefits like near-real-time insights, low latency, and reduced cloud bandwidth usage bolster the uptake of edge computing for web applications across industries. Adopting an edge computing architecture for web applications can increase productivity, lower costs, save bandwidth, and create new revenue streams.
I have found there are four critical enablers for edge computing that help web developers and architects get going.
The edge ecosystem comprises multiple components: devices, gateways, edge servers or nodes, cloud servers, and so on. For web applications, edge workloads should be portable enough to run on any of these components, depending on peak load or availability.
However, there could be specific use cases like detecting poaching activity via drone in a dense forest with low or no network connectivity, which demands developing applications native to the edge devices or gateways.
Adopting cloud-native architectural patterns like microservices or serverless provides application agility. The Cloud Native Computing Foundation's (CNCF) definition of cloud native supports this argument: "Cloud native technologies empower organizations to build and run scalable applications in public, private, and hybrid clouds.
Features such as containers, service meshes, microservices, immutable infrastructure, and declarative application programming interfaces (APIs) best illustrate this approach. These features enable loosely coupled systems that are resilient, manageable, and observable. They allow engineers to make high-impact changes frequently and with minimal effort."
The foremost step in edge computing adoption would be to use a cloud-native architecture for the application or at least for the service that is to be deployed at the edge.
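To make the idea concrete, here is a minimal sketch of what a cloud-native, edge-deployable service looks like in practice: a small, stateless request handler with no shared in-process state, so it can be replicated on any edge node. The types and route names here are hypothetical illustrations, not any particular CDN's API.

```typescript
// Hypothetical request/response shapes for illustration only.
type EdgeRequest = { path: string; headers: Record<string, string> };
type EdgeResponse = { status: number; body: string };

// A stateless handler: every answer is derived from the request alone,
// so identical copies can run in the cloud or on any edge node.
export function handle(req: EdgeRequest): EdgeResponse {
  switch (req.path) {
    case "/healthz":
      // Liveness endpoint, useful when an orchestrator schedules the service.
      return { status: 200, body: "ok" };
    case "/greet":
      // Derives output purely from request data; no session state is kept.
      return { status: 200, body: `hello ${req.headers["x-user"] ?? "anonymous"}` };
    default:
      return { status: 404, body: "not found" };
  }
}
```

Because the handler holds no state, scaling it is a matter of running more copies wherever capacity exists, which is exactly the agility edge deployment needs.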
Cloud Service Providers (CSPs) offer services like computing and storage local to a region or zone, which act like mini/regional data centers managed by CSPs. Applications or services adhering to the “develop once and deploy everywhere” principle can be easily deployed on this edge infrastructure.
CSPs like AWS (Outposts, Snowball), Azure (Edge Zones), GCP (Anthos), and IBM (Cloud Satellite) have already extended some of their fully managed services to on-premises setups. Growth-stage startups and enterprises that can afford the associated cost can leverage these hybrid cloud solutions to deploy edge solutions faster and with greater security.
For applications running on mobile devices that rely on cellular connectivity, 5G technology can provide a considerable latency benefit. In addition, CSPs are deploying compute and storage resources closer to the telecom carrier's network, which mobile apps like gaming or virtual reality can use to enhance the end-user experience.
Web applications like online shopping portals can deliver a better customer experience with reduced latency when empowered with such services. For example, an application can benefit by moving cookie-manipulation logic to the CDN edge instead of hitting the origin server. This move proves especially effective during heavy traffic surges, such as Black Friday and Cyber Monday.
Moreover, the same method works well for A/B testing: you can serve a fixed subset of users an experimental version of the application while the rest receive the standard version.
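The core of such an edge A/B test is deterministic bucketing: hashing a stable user identifier (typically read from a first-party cookie at the edge) so the same visitor always lands in the same bucket without consulting the origin. The sketch below shows only that bucketing logic; `abBucket` is a hypothetical helper, and the cookie-reading and CDN-specific request APIs are omitted.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: deterministically assign a visitor to an A/B bucket.
// `userId` would normally come from a first-party cookie set at the edge.
export function abBucket(
  userId: string,
  experimentPercent: number,
): "experiment" | "control" {
  // Hash the id so the assignment is stable across requests and edge nodes.
  const digest = createHash("sha256").update(userId).digest();
  // Map the first two bytes of the digest to a slot in [0, 100).
  const slot = digest.readUInt16BE(0) % 100;
  return slot < experimentPercent ? "experiment" : "control";
}

// Usage: roll the experiment out to roughly 10% of visitors.
const bucket = abBucket("user-1234", 10);
```

Because the bucket is a pure function of the user id, any edge node reaches the same decision independently, with no coordination or origin round trip.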
The diversity of neural network models and frameworks has grown manifold in the last few years, encouraging developers to use and share models across a broad spectrum of frameworks, tools, runtimes, and compilers. But before running AI/ML models on varied edge devices, developers and entrepreneurs should look for a standard model format to counter the edge's heterogeneity.
In working with dozens of startups, I have found that the best business decisions sometimes depend on early adoption of emerging technologies like edge computing for better impact on customers.
However, adopting emerging technology takes forethought and planning to be successful. By following the enablers above, you are well-positioned for seamless and sustainable integration of edge computing to develop web-based applications.
Image Credit: Ketut Subiyanto; Pexels; Thank you!
The post Adopting Edge Computing for Web Apps – 4 Key Enablers appeared first on ReadWrite.