EDGE COMPUTING, CHANGING THE CLOUD TO FOG


Nearly every new technology is disruptive to the extent that it’s expected to replace older technologies. Sometimes, as with the cloud, old technology is simply rebranded to make it more appealing to a new market. Let’s remember that cloud computing had already existed in some form: at one stage it was called on-demand computing, and then it became application service provisioning.
Now there’s edge computing, which some people are also calling fog computing. Some industry commentators feel it will replace the cloud as an entity. Yet the question is, will it really? The same viewpoint emerged when television was invented: TV was expected to be the death of radio, yet people still tune in to radio stations in their thousands every day. Of course, some technologies are disruptive in that they change people’s habits and their way of thinking. Once, people enjoyed listening to their Sony Walkmans; today, most folks listen to their favorite tunes on smartphones.
Complementary Models
So the two approaches may in fact end up complementing each other. The argument for bringing computation back to the edge comes down partly to growing data volumes, which congest networks and make latency the culprit. There will be more data per transaction, more video, and more sensor data, and virtual and augmented reality will play an increasing role in that growth, too. As volumes grow, latency will become more challenging than it was previously. Furthermore, although it may make sense to put computation close to a device such as an autonomous vehicle to eliminate latency, remote storage via the cloud remains critical.
The cloud can still deliver certain services, such as media and entertainment. It can also back up and share data emanating from a vehicle for analysis. From a broader perspective, creating several smaller data centers or disaster-recovery sites may reduce economies of scale and make operations less efficient.
Doing so may mitigate latency, but the data may also reside within the same circles of disruption, with devastating consequences when disaster strikes. So, for the sake of business continuity, some data storage or processing may have to occur elsewhere, away from the network edge. In the case of autonomous vehicles, which must operate regardless of whether a network connection exists, it makes sense for the vehicle to perform certain types of computation and analysis locally, while the cloud backs up much of that data whenever a connection is available. Edge and cloud computing are therefore likely to follow a hybrid approach rather than a standalone one.
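As a rough illustration of that hybrid split, here is a minimal Python sketch. The class, field names, and the obstacle-distance rule are hypothetical, and the cloud upload is only a placeholder; the point is the division of labor, with the time-critical decision never waiting on the network while a durable copy of the data still reaches the cloud.

```python
import json
import queue
import time

class EdgeNode:
    """Hypothetical on-vehicle node: analyzes sensor data locally and
    backs the results up to the cloud only when a connection exists."""

    def __init__(self):
        self.pending_backup = queue.Queue()   # results awaiting upload

    def analyze_locally(self, sensor_reading):
        # Time-critical decision made on the vehicle, even with no connectivity.
        decision = "brake" if sensor_reading["obstacle_m"] < 5 else "cruise"
        result = {"ts": time.time(), "decision": decision, **sensor_reading}
        self.pending_backup.put(result)       # keep a copy for later cloud backup
        return decision

    def sync_to_cloud(self, connected):
        # Opportunistic backup: drain the queue only while a link is available.
        if not connected:
            return 0
        uploaded = 0
        while not self.pending_backup.empty():
            record = self.pending_backup.get()
            print("uploading:", json.dumps(record))   # placeholder for a real upload
            uploaded += 1
        return uploaded

node = EdgeNode()
print(node.analyze_locally({"obstacle_m": 3.2}))   # immediate, local answer
print(node.sync_to_cloud(connected=False))         # offline: nothing uploaded
print(node.sync_to_cloud(connected=True))          # back online: backlog drains
```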
Edge to Cloud
In a LinkedIn Pulse article (“Edge Computing vs. Cloud Computing: Where Does the Future Lie?”), Saju Skaria, senior director at consulting firm TCS, offers several examples of where edge computing could prove advantageous. He certainly doesn’t think the cloud will disappear: “Edge Computing does not replace cloud computing....In reality, an analytical model or rules might be created in a cloud then pushed out to Edge devices…and some [of these] are capable of doing analysis.” He then goes on to talk about fog computing, which involves data processing from the edge to a cloud, and he suggests people should remember data warehousing, too, because it handles “the massive storage of data and slow analytical queries.”
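The build-in-the-cloud, analyze-at-the-edge pattern Skaria describes can be sketched in a few lines of Python. Everything here is illustrative: the “model” is a crude threshold derived from toy data, and the push to the device is simulated by handing over a JSON blob, but it shows the shape of the workflow, where each sensor reading is evaluated on the device with no per-reading round trip.

```python
import json

# --- "Cloud" side: derive a simple analytical model from historical data ---
history = [(20.0, 0), (35.0, 0), (70.0, 1), (90.0, 1)]   # (temperature, alarm fired?)
alarm_temps = [t for t, alarm in history if alarm]
model_blob = json.dumps({"kind": "threshold", "value": min(alarm_temps)})

# --- "Edge" side: the pushed model runs locally on the device ---
def edge_infer(model_json, reading):
    """Evaluate a sensor reading against the model held on the device."""
    model = json.loads(model_json)
    return reading >= model["value"]

print(edge_infer(model_blob, 25.0))   # False: handled entirely on the device
print(edge_infer(model_blob, 85.0))   # True: still no call back to the cloud
```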
Eating the Cloud
Despite this argument, Gartner’s Thomas Bittman sides with those who believe the “Edge Will Eat the Cloud”: “Today, cloud computing is eating enterprise data centers, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud….But there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The edge will eat the cloud…and this is perhaps as important as the cloud computing trend ever was.”
Bittman also says, “The agility of cloud computing is great—but it simply isn’t enough. Massive centralization, economies of scale, self-service and full automation get us most of the way there—but it doesn’t overcome physics—the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a data center miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”
Data Acceleration
He makes some valid points, but he falls into the familiar argument about latency: that data centers must sit close to the devices and users they serve. Wide-area networks, however, will always be the foundation of both edge and cloud computing. Bittman also seems not to have come across data-acceleration tools such as PORTrockIT and WANrockIT. Physics is certainly a limiting factor that will always be at play in networks of all kinds, including WANs, but you can now place your data centers at a distance from each other without crippling performance: latency can be mitigated, and its impact reduced, no matter where the data processing occurs and no matter where the data resides.
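To put rough numbers on why latency, not just bandwidth, throttles WAN transfers, the back-of-the-envelope calculation below uses the classic window-divided-by-round-trip-time bound for a single TCP flow. The mitigation shown (raising the effective window, for example through bigger buffers or parallel streams) is a generic illustration of the idea behind data acceleration; it is not a description of how PORTrockIT or WANrockIT actually work.

```python
# Why round-trip time (RTT), not raw bandwidth, caps single-flow WAN throughput.
def max_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound for one flow: window size divided by round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024          # a classic 64 KiB TCP window
for rtt in (1, 20, 80):     # LAN-ish, regional, intercontinental RTTs in ms
    single = max_throughput_mbps(window, rtt)
    # Generic mitigation: a larger effective window (bigger buffers or many
    # parallel streams) lifts the bound; illustrative only, not any product's method.
    accelerated = max_throughput_mbps(window * 16, rtt)
    print(f"RTT {rtt:3d} ms: single flow ~{single:6.1f} Mb/s, "
          f"with 16x effective window ~{accelerated:7.1f} Mb/s")
```

At an 80 ms intercontinental round trip, a single 64 KiB-window flow tops out in the single-digit megabits per second, which is why mitigating latency’s impact matters more than adding raw bandwidth.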
So let’s avoid treating edge computing as the answer to everything. It’s one solution, and so is the cloud; together, the two technologies can support each other. One commentator, responding to a Quora question about the difference between edge computing and cloud computing, puts it this way: “Edge computing is a method of accelerating and improving the performance of cloud computing for mobile users.” The argument that the edge will replace cloud computing is therefore a foggy one. Cloud computing may at some stage be renamed for marketing reasons, but it’s here to stay.
