AI Data Processing at the Edge Reduces Costs, Data Latency

A race is on to accelerate artificial intelligence (AI) at the edge of the network and reduce the need to transmit huge amounts of data to the cloud. 

The edge, or edge computing, brings data processing resources closer to the data and devices that need them, reducing data latency, which is critical for time-sensitive applications such as video streaming and self-driving cars.

Development of specialized silicon and enhanced machine learning (ML) models is expected to drive greater automation and autonomy at the edge for new offerings, from industrial robots to self-driving vehicles. 

Vast computing resources in centralized clouds and enterprise data centers are adept at processing large volumes of data to spot patterns and create machine learning training models that “teach” devices to infer what actions to take when they detect similar patterns. 
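
In practice, that division of labor often looks like the minimal sketch below: train where the bulk data and compute live, then export a compact artifact a constrained device can run locally. This assumes PyTorch and ONNX purely for illustration; the data, model, and file name are toy stand-ins, not anyone's production pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for labeled telemetry aggregated in the cloud.
features = torch.randn(1024, 8)             # 8 sensor readings per sample
labels = (features.sum(dim=1) > 0).long()   # toy "normal vs. failing" label

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                        # short toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Export a compact artifact a lightweight edge runtime can load,
# so inference happens on the device rather than in the cloud.
torch.onnx.export(model, torch.randn(1, 8), "edge_model.onnx")
```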

But when those models detect something out of the ordinary, they are forced to seek intervention from human operators or get revised models from data-crunching systems. That’s not sufficient in cases where decisions must be made instantaneously, such as shutting down a machine that is about to fail. 
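
A rough sketch of that edge-side decision logic follows; every name here (predict, shut_down_machine, escalate_to_cloud) is a hypothetical stand-in rather than any particular product's API.

```python
import random

CONFIDENCE_FLOOR = 0.90   # below this, the device defers to the cloud

def predict(reading):
    """Hypothetical on-device model: returns (label, confidence)."""
    confidence = random.random()
    label = "imminent_failure" if reading["vibration_mm_s"] > 10 else "normal"
    return label, confidence

def shut_down_machine():
    print("local actuator: machine halted")

def escalate_to_cloud(reading):
    print("unfamiliar pattern, queued for cloud review:", reading)

def handle_reading(reading):
    label, confidence = predict(reading)    # runs locally, no round trip
    if label == "imminent_failure":
        shut_down_machine()                  # decision made instantaneously
    elif confidence < CONFIDENCE_FLOOR:
        escalate_to_cloud(reading)           # the out-of-the-ordinary case

for _ in range(5):
    handle_reading({"vibration_mm_s": random.uniform(0.0, 12.0)})
```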

“A self-driving car doesn’t have time to send images to the cloud for processing once it detects an object in the road, nor do medical applications that evaluate critically ill patients have leeway when interpreting brain scans after a hemorrhage,” McKinsey & Co. analysts wrote in a report on AI opportunities for semiconductors. “And that makes the edge, or in-device computing, the best choice for inference.”

That’s where AI data processing at the edge is gathering steam.

Overcoming Budget and Bandwidth Limits

As the number of edge devices increases exponentially, sending high volumes of data to the cloud could quickly overwhelm budgets and available bandwidth. That issue can be mitigated with deep learning (DL), a subset of ML that uses neural networks to mimic the reasoning processes of the human brain, allowing a device to learn on its own from unstructured, unlabeled data. 
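
One common shape this takes is self-supervised anomaly detection on the device itself. The sketch below, again assuming PyTorch and using toy data, trains a small autoencoder on unlabeled readings; because the model learns what "normal" looks like without labels, only unfamiliar readings, the ones with high reconstruction error, need to leave the device.

```python
import torch
import torch.nn as nn

readings = torch.randn(512, 8)   # unlabeled, unstructured telemetry

# A tiny autoencoder: compress each reading, then reconstruct it.
autoencoder = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),  # squeeze to a 3-value code
    nn.Linear(3, 8),             # rebuild the original 8 values
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

for _ in range(300):             # self-supervised: no labels required
    optimizer.zero_grad()
    loss = ((autoencoder(readings) - readings) ** 2).mean()
    loss.backward()
    optimizer.step()

def is_worth_sending(reading, threshold=1.5):
    # High reconstruction error means the device has not "seen" this
    # pattern before; only then is the reading forwarded upstream.
    error = ((autoencoder(reading) - reading) ** 2).mean().item()
    return error > threshold
```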

With DL-embedded edge devices, organizations can reduce the amount of data that needs to be sent to data centers. Similarly, specialized ML-embedded chips can be taught to discard raw data that doesn't warrant action. For example, a smart camera might forward video to the cloud only when it meets certain criteria, such as containing a human figure, and discard frames that show only birds or dogs.
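
That camera example might look like the following filter, where the detector and upload function are hypothetical placeholders for an on-chip model and a real transport layer.

```python
SEND_CLASSES = {"person"}             # criteria for forwarding to the cloud

def detect_classes(frame):
    """Hypothetical on-chip detector; a real one runs a compact vision model."""
    return {"person"} if "person" in frame else {"bird"}

def upload_to_cloud(frame):
    print("uploading:", frame)

def on_frame(frame):
    if SEND_CLASSES & detect_classes(frame):
        upload_to_cloud(frame)        # a human is present: worth the bandwidth
    # frames showing only birds or dogs never leave the device

for frame in ["bird-at-feeder", "person-at-door", "bird-on-fence"]:
    on_frame(frame)
```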

“There isn’t enough bandwidth in the world to just collect data and send it to the cloud,” said Richard Wawrzyniak, a senior market analyst with Semico Research Corp. “AI has advanced to the point that data crunching resides in the device and then sends whatever data points are relevant to somewhere to be processed.”


Deciding What Is Near and Dear

Organizations face the challenge of developing architectures that differentiate between data that can be processed at the edge versus that which should be sent upstream. 

“We are seeing two dimensions,” explained Sreenivasa Chakravarti, vice president of the manufacturing business group at Tata Consultancy Services (TCS). “Most organizations are trying to segregate the data and talking about how to keep what is nearest to you at the edge and what to park in the cloud.” This requires having a cloud-to-edge data strategy.
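
Expressed concretely, such a strategy can start as little more than a routing table that pins latency-critical topics to the edge and parks the rest in the cloud; the topics and tiers below are invented for illustration.

```python
ROUTING_POLICY = {
    "machine_vibration": "edge",   # feeds instant shutdown decisions
    "safety_video": "edge",        # filtered locally, forwarded only on a hit
    "daily_throughput": "cloud",   # aggregate analytics, not time-critical
    "model_telemetry": "cloud",    # collected upstream to retrain models
}

def route(topic, payload):
    destination = ROUTING_POLICY.get(topic, "cloud")  # default: park upstream
    print(f"{topic} -> {destination}: {payload}")

route("machine_vibration", {"mm_s": 9.7})
route("daily_throughput", {"units": 4210})
```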

Chakravarti said he expects autonomous edge capabilities to be used in more production lines, not just in self-driving vehicles. The challenge is in synchronizing autonomous activity in a larger ecosystem, he said, as manufacturers want to increase the throughput of their operations, not just individual systems.

Similarly, many autonomous systems must incorporate some type of human interface. 

“Before the automotive industry is ready to let AI take the wheel, it first wants to put it in currently produced cars with lots of driver-assist technology,” wrote ARC Advisory Group senior analyst Dick Slansky. “AI lends itself very well to powering advanced safety features for connected vehicles. The driver-assist functions embedded into the vehicles coming off production lines today are helping drivers become comfortable with AI before the vehicles become completely autonomous.”

The Future of AI Data Processing at the Edge

Almost every edge device shipping by 2025, from industrial PCs to mobile phones and drones, will have some type of AI processing, predicted Aditya Kaul, research director with market research firm Omdia|Tractica. 

“There is a host of other categories where we haven’t seen activity or visibility yet, because original equipment manufacturers haven’t moved that fast across all traditional areas and need to understand the value of AI at the edge. That will be the second wave in 2025 to 2030,” Kaul predicted.

Chip manufacturers are engaged in a heated arms race to market AI acceleration modules for edge devices. Established companies such as microprocessor titan Intel and graphics processor leader NVIDIA face challenges from new competitors, including well-funded tech giants Google, Microsoft and Amazon, and emerging companies such as Blaize and Hailo Technologies. Some 30 companies were developing AI acceleration chip technology for edge applications at the beginning of the year, a field likely headed toward cutthroat competition. 

“I wouldn’t want to be one of those companies,” said Simon Crosby, Chief Technology Officer at Swim, a developer of software for processing streaming data. “In the edge world, ultimately acceleration parts have to be used in a vertically integrated solution by somebody who is going to take a hardware-based solution to market. Customers don’t care about the innards.”
