Industry News
The Nature of IoT Progress


Friday, January 11, 2019

The Internet of Things is becoming more difficult to define and utilize for an effective business strategy. While an increasing number of devices send data to the cloud or some local server, so much data is being generated and moved around that new strategies are being developed to rethink what needs to be processed where.

Back in 2013, when the IoT concept really began taking off, connectivity to the Internet was considered the ultimate goal because the biggest compute resources were still in the data center. Today, compute resources are becoming more distributed and processing is becoming more nuanced. In fact, almost all of the early major proponents of the IoT, such as Cisco, Arm, Samsung and Philips, have shifted their IoT focus to data management, processing, and security.

The big issues now are how to connect different devices to each other, and what to do with all the data generated by tens of billions of sensors and devices. That includes what data should be processed where, how much really needs to be moved, how to move it more quickly, and how to protect data in place and in transit.

“When the IoT first started, a lot of it was based on things like temperature sensors,” said Geoff Tate, CEO of Flex Logix. “There’s not a lot of data coming out of temperature sensors. But now we’re adding in things like video, where you have cameras doing surveillance. Processing now has to be done closer to the camera—either in the camera or at some edge server. There’s also a cost for the radio chip to send everything, and not everywhere has good enough connectivity. On top of that, networks were designed for downloading large amounts of data, not uploading it.”

This adds a whole different slant to the IoT concept, because it means more data now needs to be processed in place or nearby.
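Tate's point about video can be made concrete with rough arithmetic. The numbers below are illustrative assumptions, not figures from the article, but they show why a surveillance camera is a fundamentally different problem than a temperature sensor:

```python
# Back-of-envelope estimate (illustrative numbers, not from the article):
# daily data volume of one surveillance camera vs. one temperature sensor,
# and why shipping everything upstream quickly becomes impractical.

def daily_bytes(bytes_per_sample: float, samples_per_sec: float) -> float:
    """Return bytes produced per day at a given sampling rate."""
    return bytes_per_sample * samples_per_sec * 86_400  # seconds per day

# A temperature sensor: a 4-byte reading once per minute.
temp_sensor = daily_bytes(bytes_per_sample=4, samples_per_sec=1 / 60)

# A 1080p camera at 30 fps, H.264-compressed to roughly 4 Mbit/s.
camera = daily_bytes(bytes_per_sample=4e6 / 8 / 30, samples_per_sec=30)

print(f"temperature sensor: {temp_sensor / 1e3:.2f} kB/day")  # ~5.76 kB/day
print(f"camera:             {camera / 1e9:.1f} GB/day")       # ~43.2 GB/day
```

At tens of gigabytes per camera per day, even a modest deployment overwhelms an uplink that was never sized for it, which is what pushes processing toward the camera or an edge server.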

“Even with 5G, there is a huge amount of data being produced,” said Mike Fitton, senior director of strategic planning at Achronix. “What’s changing is that processing capability will move from the data center and flow toward the edge. Processing will need to happen everywhere. You’re going to see a shift in relative ratios along those lines.”

Alongside the growing volume of data is its increasing value. And the more data that is collected from more sources, the greater that value. But realizing it doesn’t necessarily require shipping the data to the cloud, where it can be mined in the context of other data. Some of it can be used by other devices, or even by infrastructure.

These shifts are behind last month’s merger of the Industrial Internet Consortium (IIC) and the OpenFog Consortium, which combined the networking standards and architecture development of the OpenFog Consortium with the IIC’s emphasis on testing product integration. The IIC is concerned with devices spread around the network, while the OpenFog Consortium is concerned with the networking, storage and processing infrastructure necessary to efficiently connect everything between the edge and the cloud. The merger of the two groups reflects the evolution of technology rather than a purposeful direction by either one, said IIC President Bill Hoffman.

“Over three years of working on fog computing and trying to address bandwidth issues and the trend of AI moving toward the edge, it became clear we shared members and interests and problems that we’d been trying to solve,” Hoffman said. “We reached a point that we could streamline the way we drove technology deliverables and improve the overall market to consolidate discussion of fog computing and the IoT in one place.”

IIoT and the edge

Yet it remains to be seen how effective this approach becomes. The electronics industry is littered with devices that never caught on, used far too much power, had too little performance, missed the market entirely, or got lost in a sea of similar products.

And while most IoT watchers point to the IIoT as the place where the real benefits are, successful IoT strategies are harder to develop than initially thought. A May 2017 report by Cisco showed that nearly 60% of IoT initiatives never made it beyond the proof-of-concept phase, and one-third of completed projects were not considered a success.

“You see some industries — mining definitely has a need, and manufacturing and a few others — but overall, across most industries we don’t see much engagement,” said Matt Vasey, OpenFog chairman and president, and also a director of AI and IoT business development at Microsoft. “A lot of industries just haven’t picked up in full force.”

The reasons are varied, and sometimes specific to a company or industry segment. “It takes a great amount of transformation to really take advantage of edge and IoT, so there’s a lot of resistance from production,” Hoffman said. “There also aren’t a lot of metrics out there to demonstrate the value. If you can save 50% on the cost of a process, that’s clear, but most of the benefits are less clear than that.”

This doesn’t diminish the value of the data collected by sensors in smart devices, however. “If you can do something as simple as decrease fuel consumption by 1% by utilizing that data, that’s a huge impact,” said Achronix’s Fitton. “This is the kind of stuff that’s happening in the industrial IoT, and that’s having more impact than the classic definition of the IoT, which was always dismissed as something of a vague term.”

The key is being able to tailor a solution that works, in a market where it makes sense. And even then, there is a confusing array of tools, platforms and strategies. In June of 2017, IoT Analytics counted more than 450 software packages being offered either as operating systems or as system software for connected networks of IIoT devices. Even if the tally was right, 450 options is chaos, not progress, according to a May 2017 McKinsey report. “If there are [even] 100 IoT platforms, then there is no platform, just aspirants,” the report concluded.

The IEEE and a pair of industry groups formed to make sense of the growing edge computing trend may help reduce the chaos in this segment. Last June, the IEEE Standards Association, at the instigation of the IEEE Communications Society, adopted the OpenFog Reference Architecture as an official standard, now called IEEE 1934. That standard looks at east-west flow of data, rather than just north-south, which is a significant departure from how the IoT concept originally was laid out by leading proponents.

“We have seen a continuum of intelligence extend from the cloud right down to the edge,” said Vasey. “We are trying to make sure we have the right tools for the job—near the edge where devices might need storage and servers, or just mid-tier gateways, to near-edge data centers with much richer resources—and extending right up to the cloud in a coherent way. We’re no longer dealing with just the cloud or just the edge or just the middle tier. It’s more of a continuum. And we get feedback from customers saying that model makes a lot of sense, putting compute and storage and networking resources anywhere up and down that continuum where you need it.”

Thinking differently

So what does this mean for the IoT as a whole? The key may be thinking about the overall concept less from the standpoint of the initial IoT vision and more from the flow of the data.

“There’s a need for a lot more compute at the source of the data,” said Susheel Tadikonda, vice president in Synopsys’ verification group. “The volume of data is taking so much power and bandwidth, and latency is so much of an issue, that you need to consume the data at the source. If you can make the sensors intelligent, then you can look through the data to find what’s important. What we’re seeing is that everything is going together in bundles of data.”
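One way to picture “consuming the data at the source” is a sensor that filters its own stream. The sketch below is a hypothetical illustration, not any vendor's implementation: it maintains a running mean and variance (Welford's online method) and forwards only readings that deviate sharply from the baseline, so the vast majority of samples never leave the device.

```python
# A minimal sketch of an "intelligent sensor" that consumes its own data
# locally and forwards only anomalous readings upstream.

def filter_at_edge(readings, threshold=3.0):
    """Yield only readings more than `threshold` standard deviations
    from the running mean; everything else is consumed locally."""
    mean, m2, n = 0.0, 0.0, 0
    for x in readings:
        if n >= 2:
            std = (m2 / (n - 1)) ** 0.5
            if std > 0 and abs(x - mean) > threshold * std:
                yield x       # interesting: send upstream
                continue      # don't fold outliers into the baseline
        # Welford's online update of the running mean and variance
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)

readings = [20.1, 20.3, 19.9, 20.0, 20.2, 35.7, 20.1, 20.0]
print(list(filter_at_edge(readings)))  # only the 35.7 spike is forwarded
```

Eight readings in, one reading out: that compression ratio, multiplied across billions of sensors, is the bandwidth and power saving Tadikonda is describing.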

Tadikonda noted that while the data isn’t new, the emphasis on using it more effectively is definitely new. “No one cared about it in the past. The underlying data was very powerful, but no one had any idea of what it would spawn. If you look at Uber, no one knew something like that would happen until 4G was there. And data centers used to be their own buildings. Now you can have your own cloud. But the data also has to go from somewhere to somewhere else, and bandwidth still has to increase because even if you do more processing locally, the number of connected devices is growing so fast that they’re producing more data in total.”

This, in turn, is affecting how systems are being designed, from the data center to the networks and all the way down to the various components that make up chips.

“We are moving into a data-driven economy,” said Lip-Bu Tan, chairman and CEO of Cadence. “It’s all about data. A lot of sensors collect data. Depending upon the application, some will move into the intelligent edge. When you process the data there, you want to do that with very low power. That can drive productivity and efficiency. The other piece of this is the hyperscale data center. You need infrastructure there to drive the applications.”

Security issues

Alongside all of this, the data has to be protected.

“Now the question is how we evolve that so that we can build systems that are connected but secure, because there are more and more connected systems,” said Helena Handschuh, a fellow in Rambus’ Cryptography Research Division. “We need more end-point security. If you look at the PC industry and networking, there are ways to detect security issues and then try to mitigate them after that. But no matter how good your security, eventually something will go wrong.”

Security was always one of the top concerns cited when it came to the IoT. Despite the warnings, the chip industry is just beginning to take a serious look at how to automate some of those checks. One of the drivers of that shift was the cost of fixing hardware—measured both in time spent on mitigating the problem and in lost performance inside of data centers—after Google Project Zero exposed hardware vulnerabilities related to speculative execution and branch prediction. Prior to that, a botnet attack based on the Mirai malware brought down some of the largest Internet sites.

“Security of a chip is something like compatibility,” said Wally Rhines, CEO Emeritus at Mentor, a Siemens Business. “You can always show that something is incompatible. You can never guarantee that something is compatible. The same is true here. You can always show there is a vulnerability, but you can never guarantee there are no other vulnerabilities. It becomes an asymptotic approach, where you’ve verified so much that if there is a problem it’s going to be really rare and hard to get to. Beyond just the simulation, the insertion of features into a chip that can do analysis to minimize the possibility of a buried Trojan is, in fact, a key part. The other key part is the incoming IP. How do you verify the IP you’ve got does not have Trojans in it, and whom do you trust? Do you trust your IP vendor? Do you trust some kind of simulation that goes through rigorous verification of that IP?”

This is becoming a huge issue across the industry, because IP tracking is getting more convoluted as designs become more heterogeneous to deal with the increase in different types of data.

“There are a lot of people getting into whole system architectures,” said Ranjit Adhikary, vice president of marketing at ClioSoft. “There are companies with a lot of PDKs, but you aren’t sure which PDKs are being used and you don’t know where the documentation is. This is a big mess. You don’t even know if IP went to tapeout in silicon or how much was paid for it.”

That, in turn, has an impact on security. “There is no perfect solution when it comes to security,” Rhines noted. “Nothing gives you 100% certainty. But there are tests you can run. You can put intelligent features into the chip that look at the data flow and look for unusual sequences and put up a flag when an unusual sequence is executed, and possibly block the execution when a suspicious circumstance occurs. Products we’ve tested in the past merely flag unusual things going on in the system, and it’s up to them to look into it.”
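The kind of on-chip monitor Rhines describes can be approximated in software for illustration. The sketch below is an assumption about how such a check might work, not any vendor's actual product: it learns the short sequences (n-grams) of transaction types observed in a known-good trace, then raises a flag whenever a sequence it has never seen is executed.

```python
# Illustrative sketch of flagging "unusual sequences": learn the n-grams
# of transaction types from a known-good trace, flag anything unseen.

from collections import deque

class SequenceMonitor:
    def __init__(self, n=3):
        self.n = n
        self.known = set()            # n-grams observed during training
        self.window = deque(maxlen=n) # most recent n events

    def train(self, trace):
        """Record every n-gram that appears in a known-good trace."""
        for i in range(len(trace) - self.n + 1):
            self.known.add(tuple(trace[i:i + self.n]))

    def observe(self, event):
        """Return True (raise a flag) if the current n-gram is unseen."""
        self.window.append(event)
        if len(self.window) < self.n:
            return False
        return tuple(self.window) not in self.known

monitor = SequenceMonitor(n=3)
monitor.train(["fetch", "decode", "exec", "mem", "fetch", "decode", "exec"])
flags = [monitor.observe(e) for e in ["fetch", "decode", "exec", "dma_write"]]
print(flags)  # [False, False, False, True] - the unexpected dma_write trips the flag
```

As Rhines notes, a real product might merely flag the anomaly, block execution, or leave the investigation to the user; the design choice is how much autonomy to give the monitor.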

By: DocMemory
Copyright © 2018 CST, Inc. All Rights Reserved