“Fog nodes are often at an ideal level of the network — between endpoint sensors, actuators and the cloud — to serve as critical storage resources for the Internet of Things,” says Helder Antunes, chairman of the OpenFog Consortium and a senior director at Cisco, which is credited with coining the term fog computing. “A fog node the size of a shoebox — located on a factory floor, in a vehicle or as part of a smart building — can host tens of terabytes of reliable, high-performance, solid-state storage, reachable by endpoint things in less than a millisecond.”
Cloud access for that data, by contrast, would take about 100 milliseconds, he says.
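To make that difference concrete, here is a minimal sketch of how an endpoint might read from a nearby fog node first and fall back to the cloud on a miss. The function name and the fetcher callables are hypothetical stand-ins, not any published OpenFog API; the latency figures in the comments simply mirror the numbers Antunes cites.

```python
import time

def read_with_fog_fallback(key, fetch_from_fog, fetch_from_cloud):
    """Try the nearby fog tier first; fall back to the cloud on a miss."""
    start = time.monotonic()
    value = fetch_from_fog(key)        # local hop, quoted at under 1 ms
    if value is None:
        value = fetch_from_cloud(key)  # WAN round trip, quoted at ~100 ms
    elapsed_ms = (time.monotonic() - start) * 1000
    return value, elapsed_ms

# Stub fetchers standing in for real storage clients:
value, ms = read_with_fog_fallback(
    "sensor-42/latest",
    fetch_from_fog=lambda k: None,          # simulate a fog-tier miss
    fetch_from_cloud=lambda k: b"reading",  # cloud holds the value
)
```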
Storage on fog nodes, rather than in the cloud, also builds in redundancy and reduces the chance of data loss, says Antunes: “Using a distributed, hierarchical storage architecture including one or more levels of fog-based storage can improve the performance, security, reliability, efficiency and cost of mission-critical IoT networks.”
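As a rough illustration of that hierarchy, the sketch below writes synchronously to the nearest fog node, then replicates to a peer fog node and to the cloud in the background. The HierarchicalStore class, the DictTier stand-in and the store() interface are assumptions made for illustration, not a specification from the OpenFog Consortium.

```python
from concurrent.futures import ThreadPoolExecutor

class DictTier:
    """Stand-in for one storage tier; a real tier would be a network service."""
    def __init__(self):
        self.data = {}
    def store(self, key, value):
        self.data[key] = value

class HierarchicalStore:
    """Write through to the nearest fog node; replicate upward in the background."""
    def __init__(self, local_fog, peer_fog, cloud):
        self.local_fog = local_fog  # e.g. the shoebox node on the factory floor
        self.peer_fog = peer_fog    # second fog node, for redundancy
        self.cloud = cloud          # top of the hierarchy, highest latency
        self._pool = ThreadPoolExecutor(max_workers=2)

    def write(self, key, value):
        # A synchronous write to the nearest tier keeps the critical path fast.
        self.local_fog.store(key, value)
        # Background replication means the data survives the loss of one node.
        self._pool.submit(self.peer_fog.store, key, value)
        self._pool.submit(self.cloud.store, key, value)

store = HierarchicalStore(DictTier(), DictTier(), DictTier())
store.write("sensor-42/latest", b"reading")
```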
Purdue University Begins Experimenting with Fog Computing
Mung Chiang, dean of the College of Engineering at Purdue University, co-founded the OpenFog Consortium in part to facilitate the adoption of fog computing.
One of the consortium’s major initiatives has been creating the OpenFog Reference Architecture for fog computing, which the IEEE Standards Association officially adopted in June. The architecture is based on eight core technical principles: security, scalability, openness, autonomy, RAS (reliability, availability and serviceability), agility, hierarchy and programmability.
Much like the IoT itself, fog computing will take at least a decade to mature, Chiang predicts: “It will continue to be a journey and an evolution.”