
Back to the Edge

This post was authored by Richard Arsenian, Senior Staff Solutions Architect

Ok… Time circuits… On. Flux capacitor… Fluxing. Engine… Running.

In this multipart series, we’ll be taking you on a journey “Back to the Edge,” covering edge computing—one of the hottest topics in the enterprise Internet of Things (IoT) space today.

Going back to the edge raises a lot of questions and uncertainty, and we may wonder if the hype simply comes from Silicon Valley tech giants trying to reinvent the wheel once again.

But the edge computing paradigm shift is happening right before our eyes, and it’s all about Big Data. For the first time ever, we’re collecting the world around us—from what we interact with in our daily routines to machines that are developing personae of their own.

These advanced machines are now responsible for driving, flying, and moving us around the globe, but they also represent individual datacenters in new contexts: a datacenter in the sky (drones)… perhaps even a datacenter on wheels (self-driving cars).

So why are we going back to the edge? And what happens to the cloud?

To understand where we’re going, first we must understand where we’ve been.

Marty McFly: Hey doc, you’re not going to believe this—we have to go to 1955.
Dr. Emmett Brown: I don't believe it!
—Back to the Future Part II


The Mainframe


From the 1950s into the 1960s, mainframes represented a centralized approach to large-scale computing, whereby processing, data storage, applications, and services were hosted centrally inside large cabinets. Users connected to these mainframes using terminals that we could compare to today’s thin clients.

During this period, we experienced human-to-machine interaction. Users created their own data, manually entered this data into terminals, and then executed their relevant queries as needed. But mainframes were physically enormous, prohibitively expensive, and ultimately not as powerful as users increasingly needed them to be.

Image Source: Lawrence Livermore National Laboratory, via Wikimedia Commons

PCs and Distributed Computing


During the 1990s, distributed computing emerged as a new architecture within the enterprise, involving a client-server relationship that partitioned tasks and services between workloads. This architecture aggregated a series of smaller (x86) servers to produce more processing power (CPU), RAM, and storage than the centralized mainframe could offer. Thanks to their performance and economics, the personal computers (PCs) that had gained popularity in the 1980s functioned as client devices requesting services and resources from a server over a network. But scaling these systems across organizations proved to be very expensive.

Back to a Centralized Model: Cloud Computing and the Mobile Era


With its surging popularity in the 2000s, the cloud computing paradigm revolutionized both the consumption of specialized platforms (CRM, social networking, collaboration) and the expectation that everyone can access those platforms anytime, anywhere, from any device. This era saw mobile devices (phones, tablets, notebooks) mature alongside a revolution in broadband technology. The Internet of people reached full momentum, and the size of our datasets began to increase once again, with staggering volumes of data created every 60 seconds.

And then the data just kept growing.


The Internet of Things (IoT)


With the relatively recent rise of the IoT, the Internet has evolved from user-generated static content to real-world, machine-generated Big Data. The IoT connects our everyday devices (household appliances, thermostats, wearables), objects, machines, people, and even animals to the Internet, where the data they provide can drive intelligent decisions, improvements, and insights into behavior patterns.

Machine-to-machine (M2M) communication over an IP network captures unstructured data from billions of sensors to provide information, automation, and control. The resulting streamlined embedded systems ingest data at a network’s edge and replicate that data back to a centralized cloud for processing and analysis, in a new take on the mainframe model.

It’s now time to set our time circuits to the present.


Intelligent Edge Computing


As network edges become more sophisticated and requirements for real-time data processing and decision-making expand (in applications such as driverless cars and avionics, for example), the centralized cloud computing delivery model becomes more obsolete each day: it’s constrained by network latency and connectivity, while the sensors and actuators at the edge produce an astronomical amount of data.

Compared to the few million servers estimated to be in the public cloud, the edge today encompasses billions of devices, including both business IoT and the wearables and consumer gadgets we've all grown to love over the years.

Major social media sites can accumulate hundreds of terabytes of data in a day, yet the aerospace industry alone could soon surpass not only these sites, but the entire consumer industry.

For example, at a recent Paris Air Show, Bombardier showcased its C Series jetliner, which carries Pratt & Whitney’s Geared Turbofan (GTF) engine. These engines are fitted with 5,000 sophisticated sensors that generate up to 10 GB of data per second; a single twin-engine aircraft with an average 12-hour flight time can produce up to 844 TB of data.
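
As a rough sanity check on that figure (assuming both engines stream at the peak rate for the entire flight): 2 engines × 10 GB/s × 12 hours × 3,600 seconds per hour ≈ 864,000 GB, which lines up with the 844 TB quoted once you account for decimal versus binary units.
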

With the 7,000 GTF engines rumored to be on order, Pratt & Whitney could potentially collect zettabytes of data once all those sensors are in the field. Amazing! And the totals will only climb as more companies’ sensors and actuators come online.

Requirements for real-time processing, analytics, and AI are here.


From today onward, we can expect to see the rise of machines that are sophisticated and intelligent, equipped with systems responsible for autonomous movement and decision-making, and capable of creating a persona they use to learn about their surroundings as they assist humankind.

Human intelligence fuses with these machines as they are trained to sense, infer, and act like humans, using artificial intelligence technology that replicates the inner workings of the human brain in silicon: artificial neural networks (ANNs).

In typical computer programming, we’ve been accustomed to using logical expressions and statements like “IF, THEN, ELSE, WHILE, DO,” and so on to solve problems. However, with industrial IoT applications, it’s quite likely that we wouldn’t be able to—or even know how to—write the programs required to model the decision-making capabilities we need. Even if we could begin to write such programs, they would be inordinately complex and nearly impossible to get right, let alone troubleshoot.
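
To make that contrast concrete, here is a tiny, purely illustrative sketch of the rule-based style in Python. Every threshold and branch is invented for this example and has to be anticipated by a programmer in advance, which is exactly what becomes unmanageable for real-world perception and decision-making.

```python
# A hand-written, rule-based controller (illustrative only).
# Every condition and threshold has to be anticipated in advance.
def decide(distance_to_obstacle_m: float, speed_kmh: float) -> str:
    if distance_to_obstacle_m < 5:
        return "brake_hard"
    elif distance_to_obstacle_m < 20 and speed_kmh > 60:
        return "brake"
    elif distance_to_obstacle_m < 20:
        return "steer_around"
    else:
        return "continue"

# Fine for this toy scenario, but enumerating explicit rules for rain, glare,
# pedestrians, cyclists, sensor noise, and so on is effectively impossible,
# which is why such systems are trained rather than hand-coded.
print(decide(distance_to_obstacle_m=12.0, speed_kmh=80.0))  # -> "brake"
```
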

Image Source: BruceBlaus, Blausen 0657 MultipolarNeuron, CC BY 3.0

AI builds on the concept of biological neural networks found within the most powerful computing engine known—the human brain. Our brains contain trillions of interconnected pathways of neurons and axons. Neurons pass information between each other as they take inputs from our sensory functions, psychological state, or external stimuli. These electrical impulses quickly travel through our neural networks; from here, our brains decide whether to act or simply disregard the input.

In contrast to the human brain’s natural neural network, an ANN contains a large number of artificial neurons called “units,” arranged in a series of input, hidden, and output layers (a minimal sketch follows the list below):
  • Input layer: Contains those units (artificial neurons) that receive input from the outside world. The ANN uses these inputs to learn and to obtain the information it needs to process.
  • Hidden layer: Sitting between the input and output layers, the hidden layer connects the input nodes to the output layer and transforms the input into a meaningful output. There can be more than one hidden layer, as is often the case in deep learning models.
  • Output layer: Using the information derived from the hidden layer, the output layer can make a prediction or classification.
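
To make those three layers concrete, here is a minimal feedforward pass sketched in Python with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative choices for this example only; in a real ANN the weights would be learned during training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Input layer: 4 units receiving values from the outside world.
x = rng.random(4)

# Hidden layer: 8 units that transform the input into an intermediate representation.
W1 = rng.normal(size=(8, 4))
b1 = np.zeros(8)
hidden = sigmoid(W1 @ x + b1)

# Output layer: 2 units producing the network's prediction or classification scores.
W2 = rng.normal(size=(2, 8))
b2 = np.zeros(2)
output = sigmoid(W2 @ hidden + b2)

print(output)  # two scores, for example one per class
```
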
One of the many forms and applications of AI currently in development, machine learning describes the idea that when machines encounter a series of datasets, they can learn, think, and act for themselves based on the data. We can break machine learning down as follows (see the sketch after this list):
  • Supervised learning: You provide input (x) and output (y) pairs, and an algorithm learns the mapping and relationships between them so it can predict outputs for new inputs.
  • Unsupervised learning: You invoke modeling techniques to understand the underlying structure of data where you only have input (x) data and no output (y).
  • Semisupervised learning: Works in situations where your dataset only contains outputs (y) for some of the input (x) data (for example, contacts in your social network that are categorized as friends, family, or coworkers, while others have no group defined). This technique combines supervised learning to create predictions with unsupervised learning to discover and learn the structure of the dataset.
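
The sketch below, in Python with scikit-learn, walks through the three styles on a tiny made-up dataset; the specific models used (logistic regression, k-means, and self-training) are just common representatives, not the only possibilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import SelfTrainingClassifier

# A tiny made-up dataset: x is the input, y is the output (label).
X = np.array([[0.10], [0.20], [0.90], [1.00], [0.15], [0.95]])
y = np.array([0, 0, 1, 1, 0, 1])

# Supervised: learn the mapping x -> y from fully labeled examples.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85]]))           # expected: [1]

# Unsupervised: only x is available; discover structure (here, two clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)

# Semisupervised: labels exist for only some rows (-1 marks "unlabeled").
y_partial = np.array([0, -1, 1, -1, 0, -1])
semi = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print(semi.predict([[0.12], [0.92]]))  # predictions for previously unseen inputs
```
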
Complex autonomous systems and IoT applications are becoming mission-critical for ensuring the safety and movement of humankind. Their ability to sense, infer, and act within a hundredth of a second is vitally important, and we can only achieve this level of responsiveness with platforms and hardware optimized for agility.

A recent example we’ve all become familiar with is the autopilot system developed by Tesla. Based on the NVIDIA Drive PX platform, it’s equivalent to 150 MacBook Pros, optimized for AI and deep learning, and capable of delivering 320 trillion deep learning operations per second (TOPS).

Image Source: NVIDIA Taiwan, NVIDIA Drive PX, Computex Taipei 20150601, CC BY 2.0

Think about how this self-driving car would behave if it confronted an obstacle while driving down a road. It must decide to either stop, detour to the left or right, or take some other action. But to do any real-time decision-making, the car must first transmit the images from its camera sensors to the cloud, where machine learning and contextual or situational awareness take place. Then that information comes back from the cloud and instructs the vehicle appropriately.

Image Source: Ian Maddox, Tesla Autopilot Engaged in Model X, CC BY-SA 4.0

But by that point it’s too late.
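
A rough, illustrative calculation shows why (the round-trip and inference times below are assumptions made for the sake of the example, not measured figures): at highway speed, even a modest detour through the cloud means the car travels several meters before an answer arrives, whereas on-board inference keeps that distance to a fraction of a meter.

```python
# How far does the car travel while waiting for a decision?
# The two latency figures below are illustrative assumptions, not measurements.
speed_kmh = 100
speed_m_per_s = speed_kmh * 1000 / 3600        # ~27.8 m/s

cloud_round_trip_s = 0.200                     # assumed: network + cloud inference
edge_inference_s = 0.010                       # assumed: on-board inference

print(f"Cloud path: {speed_m_per_s * cloud_round_trip_s:.1f} m travelled")  # ~5.6 m
print(f"Edge path:  {speed_m_per_s * edge_inference_s:.1f} m travelled")    # ~0.3 m
```
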

That is why, for these sophisticated machines and industrial applications, we are forced back to the edge, now coupled with a distributed model.

We will see the next iteration aggregate and dynamically cluster multiple intelligent edge devices to handle data and to judge actions from moment to moment, informing other edge devices of situations well before they arise.

For example, in situations where we have a group of self-driving cars, a vehicle miles ahead could sense an obstacle and proactively alert other self-driving cars to reduce their speed or brake well in advance. Could this capability dynamically improve road safety?
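
As a hedged illustration of how such an alert could be shared between vehicles, the sketch below publishes a hazard message over MQTT, a common lightweight IoT messaging protocol; the broker address, topic name, and message fields are invented for this example and are not drawn from any particular product.

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# All names below (broker, topic, payload fields) are illustrative assumptions.
BROKER = "broker.example.com"
TOPIC = "road/segment-42/hazards"

alert = {
    "vehicle_id": "car-0017",
    "event": "obstacle_detected",
    "lat": 37.7749,
    "lon": -122.4194,
    "recommended_action": "reduce_speed",
}

# Publish the alert so trailing vehicles subscribed to the topic can react early.
publish.single(TOPIC, payload=json.dumps(alert), qos=1, hostname=BROKER)
```
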


Edge devices also have the opportunity not only to develop their own intelligence using machine learning or AI models or personae, but also to share the results and intelligence they’ve acquired with other edge devices in real time, thus becoming a constellation of ANNs. Based on feedback from the edge, we can refine these models in the cloud before redistributing improved models back to the edge.

Fusing the Edge and the Cloud



The requirements and constraints posed by the edge dictate that it prioritize agility over performance. Because a single edge device can’t contain the compute power and storage of an enterprise datacenter, it requires a platform that can intelligently cluster groups of edge devices and, more importantly, a platform that can:
  • Provide local execution: Ingest data at speed and volume on the edge so we don’t overwhelm the cloud or mobile networks.
  • Provide contextual awareness: Correlate and enrich user-specific data like location-based technology, user intent, proximity, movement, and sensors to make better decisions.
  • Provide situational awareness: Connect real-time situational data such as environmental elements and events to what is happening around us at any given time.
  • Process and execute predictive analytics and machine learning models: Predict threats and opportunities using neural network models.
  • Act based on prescriptive analytics: Take the next best action using automated rules and adaptive processes learned from both the core datacenter and other edge devices.
With this kind of intelligent platform at the edge, the cloud can become a repository for long-term storage, as well as for in-depth analytics coupled with AI and deep machine learning functionality. By aggregating data from its groups of edge devices to better understand data patterns and trends, a policy orchestration engine based in the cloud can then update its edge devices with new personae for sensing, inferring, and acting.
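
Pulling those capabilities together, here is one possible shape of such an edge loop, sketched in Python: ingest locally, enrich with context, score with a locally cached model, act, and queue summaries for the cloud to use for long-term storage and retraining. The function names and structure are assumptions made for illustration; they don’t describe any specific Nutanix product.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float
    location: str  # contextual data, e.g. from location-based services

def enrich(reading: Reading) -> dict:
    # Contextual/situational awareness: attach location (and, in practice,
    # time, proximity, environmental events, and so on).
    return {"value": reading.value, "location": reading.location}

def predict(features: dict) -> float:
    # Placeholder for a locally cached ML model (e.g. a small neural network).
    return 1.0 if features["value"] > 0.8 else 0.0

def act(score: float) -> None:
    # Prescriptive step: take the next best action locally, without the cloud.
    if score >= 0.5:
        print("actuate: slow down")

def edge_loop(readings, cloud_buffer):
    for r in readings:                 # local execution: ingest at the edge
        features = enrich(r)           # contextual + situational awareness
        score = predict(features)      # predictive analytics on-device
        act(score)                     # prescriptive action
        cloud_buffer.append(features)  # summaries synced later to the cloud
                                       # for long-term storage and retraining

buffer = []
edge_loop([Reading("cam-1", 0.93, "lane-2")], buffer)
print(f"{len(buffer)} summaries queued for cloud sync")
```
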

Subscribe to the NEXT Community for more posts on the Nutanix strategy for “things” at the edge.

Marty: Hey, Doc, you better back up, we don't have enough road to get up to 88.
Doc Brown: Roads? Where we're going, we don't need... roads.
—Back to the Future



©️ 2018 Nutanix, Inc. All rights reserved. Nutanix, and the Nutanix logo are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).
