The “Internet of Everything” Thing… Treat with Skepticism

Since around 2011, Cisco Systems and Gartner have been laying claim to the term “Internet of Everything” (IoE), fashioned as a spin-off or upgrade of the “Internet of Things” (IoT). I would argue that we should take the IoT very seriously, and that the IoE is just marketing collateral with no real substance beyond driving billable consulting engagements to Gartner and Cisco. It’s a bold claim, but hear me out…

The term “Internet of Things” was coined in 1999 by Kevin Ashton, at the time a supply chain manager at Procter & Gamble. In an RFID Journal article in 2009 he summarized one of its key points: “nearly all the data available on the Internet were first captured and created by human beings—by typing, pressing a record button, taking a digital picture or scanning a bar code”. Likewise, the user of that data was almost always a human. So our current internet is really an “Internet of Humans”, and Ashton predicted that this would change in the near future, into a situation where most data is captured or consumed by non-human “things”. The “things” may be software, firmware on hardware, or something else entirely… just not human.

And so it has… in the five years since the RFID Journal article was published, the internet has grown substantially: data volume has tripled and mobile data volume has gone up about 15x. A large part of this growth is attributed to automatic capture or consumption by non-humans, i.e. the Internet of Things. Here is an example from this week: when I checked Google News, I saw an article in the Los Angeles Times about an earthquake that occurred on Monday. That article was written by a journalism algorithm designed to take data feeds (emails) from weather and geological sources. The full flow of data was: (1) an earthquake occurs; (2) it is detected by unmanned seismographic sensors; (3) the sensor data are integrated into an estimate of the event’s magnitude, location, duration, and type; (4) an automated email alert is broadcast by the U.S. Geological Survey website; (5) the LA Times robo-journalism software writes an article; (6) the article is published to their website; (7) a web crawler at Google picks up the new story, assesses its relevance, and places it as a highlighted news story; (8) I read the story…
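As a concrete illustration of steps (4) through (6), here is a minimal sketch of how a robo-journalism step might consume a machine-generated USGS feed and draft a templated story. It assumes the publicly available USGS real-time GeoJSON summary feed rather than the email alerts described above, and the magnitude threshold and article template are illustrative choices of mine, not the LA Times’ actual system.

```python
import json
import urllib.request

# Publicly available USGS feed of earthquakes from the past hour (GeoJSON format).
USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def fetch_recent_quakes(min_magnitude=3.0):
    """Return recent quake records at or above the given magnitude."""
    with urllib.request.urlopen(USGS_FEED) as resp:
        feed = json.load(resp)
    return [
        quake["properties"]
        for quake in feed["features"]
        if quake["properties"].get("mag") is not None
        and quake["properties"]["mag"] >= min_magnitude
    ]

def draft_story(props):
    """Fill a fixed article template from one quake record (illustrative only)."""
    return (
        f"A magnitude {props['mag']:.1f} earthquake was reported "
        f"{props['place']}. Details: {props['url']}"
    )

if __name__ == "__main__":
    for quake in fetch_recent_quakes():
        print(draft_story(quake))  # a real pipeline would publish this, not just print it
```

Every step in that loop, from the HTTP fetch to the drafted text, is machine-to-machine; a human only enters the picture at the reading step.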

The flow above took 3.5 minutes, and six of the seven steps after the earthquake were “thing to thing” communication. Fifteen years ago, it would have been one or two. And, critically, in the future it will likely be all of the steps that are “thing to thing”, most of the time. Many of the activities that should get kicked off when an earthquake occurs (insurance claim preparation, humanitarian needs assessments, civil order assessments) can and will be done by machines rather than people. Does that mean humans are leaving the internet? Not at all, but we are going to be minority users of it. Consider that fiber-optic and mobile communication systems were designed for voice traffic, i.e. phone calls, yet today 99% of their capacity is dedicated to data traffic, and voice traffic is so marginal that it’s often given away for free.

This is precisely where the “Internet of Everything” concept from Gartner and Cisco Systems breaks with reality. In the IoE worldview, the future of the internet is predicated on four pillars: Things, People, Data, and Processes. The “pillar” metaphor is interesting, since in architecture a set of pillars should be the same size to distribute the load evenly across them. But the load here will not be even: the frequency and magnitude of “thing-only” internet usage will far surpass the “human-only”, or even “human-included”, use cases. This is just common sense, because the number of connected “things” will run to tens or hundreds of billions, whereas the number of internet-connected humans will not exceed ten billion, because that’s the expected upper limit of the human population this century. Even if every human were on the internet, including those too young to use it and too old to try, those with hindering disabilities, and those without the economic means to access it, we still have a hard usage ceiling. Likewise, our cognitive nature as humans limits the number of interactions we can have: when I am reading an article in the LA Times about earthquakes, I am not available for other interactions. Together, our narrow multitasking and limited numbers mean we must eventually be the minority users of the internet.
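The arithmetic behind that claim is easy to make explicit. The device count and daily interaction rates below are illustrative assumptions of mine (the essay itself only claims tens or hundreds of billions of things against a population ceiling near ten billion), but even fairly conservative figures already put humans in the minority.

```python
# Illustrative assumptions, not measured data: only the orders of magnitude
# (tens of billions of things vs. fewer than ten billion humans) come from the text.
connected_things = 50e9         # assumed number of connected devices
humans_online = 8e9             # near the population ceiling discussed above
thing_msgs_per_day = 100        # assumed machine-to-machine messages per device per day
human_actions_per_day = 200     # assumed clicks/queries/messages per person per day

thing_traffic = connected_things * thing_msgs_per_day
human_traffic = humans_online * human_actions_per_day

human_share = human_traffic / (human_traffic + thing_traffic)
print(f"Human-initiated share of interactions: {human_share:.1%}")  # about 24% under these assumptions
```

Push the per-device message rate up, as always-on sensors tend to do, and the human share shrinks toward a rounding error.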

In summary, I’m very interested in and engaged with the Internet of Things trend, but I see the Internet of Everything as a poorly considered spin-off attempt. Whereas the IoT has broad demographic and engineering support and is being adopted widely, the IoE seems more like an attempt to turn the IoT into a product or consulting opportunity. In particular, the four-pillar model’s inclusion of “process” as the central aspect probably has more to do with Gartner or Cisco selling billable consulting services for process design than with how the IoT will actually evolve. To be blunt: the IoE concept is marketing collateral.

For more information on the real stuff, the Internet of Things, check out this Wired magazine article and of course this Wikipedia page. For those in supply chain management (like me), you may be interested to know that the IoT term was coined by the same person who organized the RFID standards work at MIT. Most of the early IoT adoption has been in our field, and was expected to be from the beginning. Here is one roadmap view for the IoT; note the first step in the process.
