We’ve all seen the numbers around the Internet of Things (IoT). Experts predict that by 2020, the IoT will reach 26 billion units and hundreds of billions in revenue. While it’s safe to say the IoT will be transformative for businesses, all of the possibilities opened up by these new connected devices will also bring new challenges to overcome.
For the network administrator of the future, the rising complexity brought on by more than 200 billion connected 'things' creates a whole new set of challenges. Forget about BYOD (Bring Your Own Device) and start thinking about BYOT (Bring Your Own Thing) -- and a 'thing' could be anything from a coffee machine to wearables to cars.
Monitoring these devices is critical in order to guarantee a constant flow of reliable data. For instance, one common application for the IoT is to use wearable technology for extended healthcare purposes. A device can monitor a patient's pulse or heart rate, and if there's a sudden drop, an ambulance could automatically be alerted, locate the patient via GPS and, hopefully, save their life. But if the software crashes, or the device gets disconnected or is simply turned off, the entire process breaks down.
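The alerting chain described above can be sketched in a few lines. This is a minimal illustration only: the function name, thresholds and alert strings are all assumptions, not any vendor's actual API. The key point it demonstrates is that a silent device must trigger an alert just as a bad reading does.

```python
def evaluate_reading(bpm, seconds_since_last_report,
                     low_bpm=40, max_silence=60):
    """Return an alert string, or None if everything looks normal.

    bpm: latest heart-rate reading, or None if no data arrived.
    seconds_since_last_report: time since the device last checked in.
    (Thresholds here are illustrative assumptions.)
    """
    if bpm is None or seconds_since_last_report > max_silence:
        # Device crashed, disconnected or switched off: the monitoring
        # chain itself has failed, which is an emergency in its own right.
        return "ALERT: device offline -- dispatch welfare check"
    if bpm < low_bpm:
        # Sudden drop: notify emergency services with the last known GPS fix.
        return "ALERT: heart rate critical -- dispatch ambulance"
    return None
```

Note that the offline check comes first: a reading of `None` must never be compared against the numeric threshold.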
All of these connected devices need sensors, networks, back-end infrastructure and analytics software to make them useful. So the question is, who monitors the monitor?
One of the biggest challenges brought on by the IoT will be integrating a very heterogeneous group of devices into an already existing network structure, particularly for companies in industries with a huge scope of possible 'things' to integrate. What happens to the data that gets picked up off the machines? It has to be fed into the central IT system so that it can be processed further, displayed usefully and used as a basis for maintenance workers to take action. Monitoring things, whether a health device or a complex industrial machine, isn't so different from monitoring network devices -- what matters is getting relevant data that can be analysed and put to a purpose.
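The normalisation step described above can be sketched as a simple mapping: readings from heterogeneous 'things' arrive in different shapes and are converted into one common record the central IT system can process, display and alert on. Device types, payload fields and the record schema here are all hypothetical assumptions for illustration.

```python
def normalise(device_type, payload):
    """Map a device-specific payload onto a common (metric, value, unit) record.

    Both device types and their payload fields are illustrative assumptions;
    a real integration layer would cover many more shapes of 'thing'.
    """
    if device_type == "wearable":
        return {"metric": "heart_rate", "value": payload["bpm"], "unit": "bpm"}
    if device_type == "machine":
        return {"metric": "spindle_temp", "value": payload["temp_c"], "unit": "°C"}
    # An unknown device type is a data-quality problem the central
    # system should surface, not silently ignore.
    raise ValueError(f"unknown device type: {device_type}")
```

Once every reading lands in the same shape, the downstream analysis, display and alerting logic no longer needs to know which 'thing' produced it.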
IT is an area that never stops evolving, and network monitoring is a big part of this development. When the concept was first introduced, the technology was mostly used to monitor physical IT devices like routers or switches (Monitoring 1.0). Then, as the virtualisation of networks became more prevalent, new concepts and functionalities had to be found in order to gather and process new kinds of relevant data (Monitoring 2.0). The next logical step was to run applications in the cloud and extend virtualisation even further. To give users of SaaS (Software as a Service) solutions and other cloud applications constant access to their production environment, the connection to the cloud has to be closely monitored (Monitoring 3.0).
Besides the necessity to continue monitoring all devices, virtual machines and cloud based applications, the Internet of Things trend launches a new era in network monitoring: Monitoring 4.0. This is because with every new thing connected to the network, the amount of data that can and should be monitored is constantly growing.
Due to the heterogeneous nature of 'things' and applications -- many of which we probably can’t even think of today -- it will be difficult to have an out-of-the-box solution that covers every possible scenario. What is interesting is that we’re already seeing IT professionals ride this evolution and adapt to the developments. By utilising customisable sensors, IT administrators are currently monitoring everything from office buildings to swimming pools -- not all being network devices, but 'things'.
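A customisable sensor of the kind mentioned above is typically just a small script that reports a channel name and value in a format the monitoring tool understands -- several tools, Paessler's PRTG among them, accept custom script sensors that return a simple XML result. The sketch below is a simplified, hypothetical example: `read_pool_temperature()` is a stand-in assumption for whatever interface the real 'thing' exposes, and the XML is reduced to the bare essentials.

```python
def read_pool_temperature():
    # Assumption: a real sensor script would query the pool's temperature
    # probe here; a fixed value stands in for illustration.
    return 27.5

def sensor_result(channel, value, unit="Custom"):
    """Format one reading as a simplified custom-sensor XML result."""
    return (
        "<prtg>"
        f"<result><channel>{channel}</channel>"
        f"<value>{value}</value>"
        f"<customunit>{unit}</customunit></result>"
        "</prtg>"
    )

if __name__ == "__main__":
    # The monitoring tool runs this script and parses its stdout.
    print(sensor_result("Pool Temperature", read_pool_temperature(), "°C"))
```

In this model the 'thing' never needs to speak SNMP or any other network-management protocol; the script is the adapter between the device and the existing monitoring infrastructure.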
The goal is to create intelligent networks along the entire value chain that can control each other autonomously. Although we may only be standing at the beginning of this revolution, it's important to start planning for the future now. Sensible integration with the existing IT infrastructure should not be taken lightly.
Andrew Timms is channel manager, Australia & New Zealand, at Paessler AG.