11. Write an application case study with network design diagram including the following topics: [10 Marks]
- IoT
- Bluetooth
- Fog Computing
- Redundant
- Resilient and measures of resilience
- Troubleshooting with half split and move method
Highlight in bold each of the above keywords.
Internet of Things (IoT)
The Internet of Things (IoT) is about interconnecting embedded systems, bringing together two evolving technologies: wireless connectivity and sensors. These connected embedded systems are independent microcontroller-based computers that use sensors to collect data. The IoT thus refers to a system of interrelated, internet-connected objects that can collect and transfer data over a wireless network without human intervention. These systems are usually networked by a wireless protocol such as Wi-Fi, Bluetooth, IEEE 802.15.4, or a custom communication system; the networking protocol is selected based on the distribution of nodes and the amount of data to be collected.
A thing in the Internet of Things can be a person with a heart-monitor implant, a farm animal with a biochip transponder, an automobile with built-in sensors that alert the driver when tire pressure is low, or any other natural or man-made object that can be assigned an Internet Protocol (IP) address and can transfer data over a network.
IoT Sensor Node Block Diagram
The Internet of Things is a combination of the following:
1. Sensors and Actuators:- Sensors measure physical quantities such as temperature, sound, moisture, and vibration, and convert the physical quantity into an electrical signal. These converted signals pass to the system, which then acts accordingly; actuators perform the reverse conversion, turning electrical signals back into physical action.
2. Connectivity:- The received signals are uploaded to the network using a communication medium such as Wi-Fi, Bluetooth or BLE, LoRaWAN, LTE, and many more.
3. People and Process:- People and processes are an important part of IoT. Networked inputs are combined into a bidirectional system that integrates data, people, and processes for better decision making. Users can access the results on a mobile or web application.
4. IoT Gateways and Frameworks:- As the name suggests, a gateway is the doorway to the internet for all the things/devices we want to interact with. Gateways act as a carrier between the internal network of sensor nodes and the external Internet or World Wide Web.
5. Cloud Server:- The data transmitted through the gateway is stored and processed securely within cloud servers, i.e., in data centres. This processed data is then used to perform intelligent actions that make our devices smart. In the cloud, all analytics and decision making happen with user comfort in mind.
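To make the sensor-to-gateway-to-cloud flow above concrete, here is a minimal Python sketch of a sensor node publishing readings over MQTT, a protocol commonly used in IoT deployments. The gateway hostname, topic name, and read_temperature() stub are illustrative assumptions, not references to any particular product.

    import json
    import random
    import time

    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    GATEWAY_HOST = "gateway.local"            # hypothetical gateway address
    TOPIC = "home/livingroom/temperature"     # hypothetical topic name

    def read_temperature():
        # Stand-in for a real sensor driver (e.g., an I2C temperature chip).
        return 20.0 + random.uniform(-2.0, 2.0)

    client = mqtt.Client()  # on paho-mqtt 2.x: mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect(GATEWAY_HOST, 1883)        # 1883 is the default MQTT port
    client.loop_start()

    for _ in range(10):
        payload = json.dumps({"temp_c": round(read_temperature(), 2), "ts": time.time()})
        client.publish(TOPIC, payload)        # the gateway relays this toward the cloud
        time.sleep(5)

    client.loop_stop()
    client.disconnect()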
Bluetooth
Bluetooth is a low-power, short-range wireless technology designed to connect phones and other portable equipment. It is a specification (IEEE 802.15.1) for the use of low-power radio communications to link phones, computers, and other network devices over short distances without wires. Wireless signals transmitted with Bluetooth cover short distances, typically up to 30 feet (10 meters).
This is achieved by embedding low-cost transceivers into the devices. Bluetooth operates in the 2.45 GHz frequency band and supports data rates of up to 721 kbps along with three voice channels. This frequency band has been set aside by international agreement for the use of industrial, scientific, and medical (ISM) devices, and later versions of the specification remain backward-compatible with 1.0 devices. Bluetooth can connect up to eight devices simultaneously, and each device has a unique 48-bit address from the IEEE 802 standard, with connections made point-to-point or multipoint.
A Bluetooth network is a personal area network, or piconet, which contains a minimum of two and a maximum of eight Bluetooth peer devices: usually a single master and up to seven slaves. The master is the device that initiates communication with other devices; it governs the communications link and the traffic between itself and the slave devices associated with it. A slave device responds to the master and must synchronize its transmit/receive timing with the master's. In addition, transmissions by slave devices are governed by the master (i.e., the master dictates when a slave may transmit): a slave may only begin transmitting in the time slot immediately following the one in which it was addressed by the master, or in a time slot explicitly reserved for its use.
The frequency-hopping sequence is defined by the Bluetooth device address (BD_ADDR) of the master device. The master first sends a radio signal asking for a response from the particular slave devices within the range of addresses. The slaves respond and synchronize their hop frequency as well as their clocks with those of the master.
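As an aside, the inquiry step described above can be reproduced from a PC with the third-party PyBluez library; the sketch below simply scans for nearby devices and prints each 48-bit BD_ADDR. It assumes a host with a Bluetooth adapter and PyBluez installed (pip install pybluez).

    import bluetooth  # third-party PyBluez library

    # Inquiry scan: ask nearby devices to respond, as a master would.
    nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
    for bd_addr, name in nearby:
        print(bd_addr, name)   # BD_ADDR is printed as aa:bb:cc:dd:ee:ff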
The core specification consists of 5 layers:
1. Radio:- specifies the air interface, frequencies, and modulation.
2. Baseband:- handles connection establishment, addressing, and timing within a piconet.
3. Link Manager Protocol (LMP):- sets up and manages the link, including authentication and power control.
4. Logical Link Control and Adaptation Protocol (L2CAP):- adapts upper-layer protocols to the baseband, providing multiplexing and segmentation.
5. Service Discovery Protocol (SDP):- lets devices discover the services other devices offer.
The first three layers comprise the Bluetooth module, whereas the last two layers make up the host. The interface between these two logical groups is called the Host Controller Interface (HCI).
Fog Computing
Fog computing is an extension of cloud computing that deploys data storage, computing, communications, control, management, and data-analytics resources closer to the endpoints. It is especially important for the Internet of Things (IoT) continuum, where low latency and low cost are needed.
Fog computing architecture is the arrangement of physical and logical network elements, hardware, and software to implement a useful IoT network. Key architectural decisions involve the physical and geographical positioning of fog nodes; their arrangement in a hierarchy; the numbers, types, topology, protocols, and data bandwidth capacities of the links between fog nodes, things, and the cloud; the hardware and software design of individual fog nodes; and how a complete IoT network is orchestrated and managed. To optimize the architecture of a fog network, one must first understand the critical requirements of the general use cases that will take advantage of fog and of the specific software application(s) that will run on it. These requirements must then be mapped onto a partitioned network of appropriately designed fog nodes. Certain clusters of requirements are difficult to implement on networks that rely heavily on the cloud (intelligence at the top) or on intelligent things (intelligence at the bottom), and these are particularly influential in the decision to move to fog-based architectures.
From a systems perspective, fog networks provide a distributed computing system with a hierarchical topology. Fog networks aim to meet stringent latency requirements, reduce the power consumption of end devices, provide real-time data processing and control with localized computing resources, and decrease the burden of backhaul traffic to centralized data centers. And of course, excellent network security, reliability, and availability must be inherent in fog networks.
6 Layers of Fog Computing Architecture
Physical and Virtualization Layer:- There are two types of nodes present in this layer: physical and virtual nodes. Nodes are initially responsible for the collection of data at multiple locations. They are equipped with sensor technology, which helps them collect data from their surroundings. The collected data is sent on for processing with the help of gateways.
Monitoring Layer:- This layer is involved with monitoring the various nodes: how long a node has been working, its temperature, and the battery life of the device.
Pre-processing Layer:- This layer handles analysis. Its main job is to extract meaningful data from the information collected from multiple sources. Data collected by nodes is thoroughly checked for errors and impurities, so this layer ensures the data is clean before it is used for other purposes.
Temporary Storage:- Mainly used for storing the data before it is forwarded to the cloud. Data is stored with the help of virtual storage.
Security Layer:- As the name suggests, this layer is entirely for the security of the data. It takes care of the privacy of the data, including encryption and decryption.
Transport Layer:- The primary function of this layer is to upload the data to the cloud for permanent storage. The data is not uploaded completely; preferably, only a portion of it is uploaded for efficiency. To keep the process efficient, lightweight communication protocols are used, as sketched below.
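As a rough illustration of how the pre-processing, temporary storage, and transport layers cooperate, here is a small Python sketch. The valid sensor range, the buffer, and the upload_to_cloud() stub are assumptions for illustration; a real fog node would use its deployment's own transport.

    from statistics import mean

    def preprocess(readings, low=-40.0, high=85.0):
        # Pre-processing layer: discard readings outside the sensor's valid range.
        return [r for r in readings if low <= r <= high]

    def upload_to_cloud(summary):
        # Transport-layer placeholder; a real node might use MQTT or HTTPS.
        print("uploading:", summary)

    buffer = []                                  # temporary storage layer
    raw = [21.4, 22.1, 999.0, 21.9, -273.0]      # 999.0 and -273.0 are faulty samples
    buffer.extend(preprocess(raw))

    # Upload only a portion of the data (a compact summary), not every sample.
    if buffer:
        upload_to_cloud({"count": len(buffer),
                         "mean": round(mean(buffer), 2),
                         "min": min(buffer),
                         "max": max(buffer)})
        buffer.clear()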
Essential Characteristics of Fog Computing
Low Latency:- Because fog nodes are connected to efficient, smart end devices, analysis and generation of data by these devices are quicker, resulting in lower latency.
Heterogeneity:- Fog computing is quite a heterogeneous infrastructure, as it can collect data from multiple sources. Being a virtualized platform providing end-user storage and other services like networking, it acts as a bridge between end devices and traditional cloud computing centers.
Mobility:- Many fog computing applications have to communicate with mobile devices, which makes them conducive to mobility techniques like LISP (Locator/ID Separation Protocol), whose main task is to decouple location from identity.
Scalability and Agility of Fog-Node Clusters:- Being adaptive at the cluster level, fog networks can support functions like elastic compute, data-load changes, and network variations.
Dominance of Wireless Access:- Although fog computing is also used in wired environments, the wireless sensors spread over vast areas in IoT deployments have their own analytics requirements, and fog computing is well suited to such wireless IoT access networks.
Redundant - Resilient and measures of resilience
Resilience and redundancy offer ways to yield a dependable system, a property known as system dependability. Both resilience and redundancy are critical for the design and deployment of computer networks, data centers, and IT infrastructure. Despite the critical nature of both, resiliency and redundancy are not the same thing.
Here, we’ll explain system dependability, the differences in resiliency and redundancy, and how they both contribute to your system’s dependability.
What is system dependability?
System dependability is defined as "the trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers". A computing system and its components must be strong enough to consistently deliver their service to end users. A key attribute of dependability is the reliability of the system, so that the rendered service is available throughout its operating duration at an acceptable performance level. Both resilience and redundancy help achieve a system's dependability, but they are not interchangeable strategies.
Resiliency and redundancy
In the domain of computer networking, resilience and redundancy establish fault tolerance within a system, allowing it to remain functional despite the occurrence of issues such as power outages, cyber-attacks, system overload, and other causes of downtime. In this context, the terms can be defined as follows:
Redundancy:- the duplication of critical components or functions of a system, so that additional elements are available to take over when a primary element fails.
Resiliency:- the ability of a system to absorb the impact of a fault and recover to a functional state while continuing to deliver service at an acceptable level.
With these definitions, redundancy and resilience are not interchangeable but complementary: redundancy is required to enhance a system's resilience, and the resilience of individual system components ensures that redundant components can recover to a functional state following a fault occurrence. Redundancy is an absolute measure of the additional components supporting system resilience, whereas resiliency is a relative (and continuous) measure of the impact of a fault on system operation.
How to build redundancy
Functional capabilities of computer system components encompass hardware, software, information, and time-based performance. Introducing redundancy into a system along these functional categories can therefore help break the fault-error-failure chain. When a fault occurs, it produces a performance error, resulting in the system's failure to deliver a dependable service. Introducing redundancy in the form of additional hardware components, multiple software features, error checks for data, and consistency of information or system performance with respect to time reduces the possibility of a fault leading to system failure.
Redundancy can take various forms and configurations, including:
Hardware redundancy:- duplicate or standby components such as power supplies, disks, and network links (for example, N+1 or 2N configurations).
Software redundancy:- multiple versions or instances of software features that can take over from one another when one fails.
Information redundancy:- error checks, error-correcting codes, replication, and backups of data.
Time redundancy:- repeating or re-executing operations so that transient faults do not corrupt the result.
A variety of redundancy configurations can be used for the design and deployment of IT infrastructure with a range of availability requirements, cost, and complexity considerations.
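As a back-of-the-envelope illustration of why such configurations help, the sketch below computes the availability of N identical components in parallel, where the system fails only if every component fails. The 99% per-component availability and the independence of failures are assumptions made for the example.

    def parallel_availability(a_single, n):
        # The system is up unless all n independent components are down.
        return 1 - (1 - a_single) ** n

    for n in (1, 2, 3):
        print(n, round(parallel_availability(0.99, n), 6))
    # 1 -> 0.99, 2 -> 0.9999, 3 -> 0.999999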
How to build resiliency
Redundancy is a measure to enhance resilience, allowing for a proactive or reactive response to impactful changes facing the system. Resilience requirements can be diverse and evolving, especially in scalable environments where a fault at one system element can cascade across the system. Other measures of resilience can include the system's ability to: withstand faults and attacks while continuing to deliver service at an acceptable level; detect and isolate failures before they cascade; adapt to changing workloads and operating conditions; and recover rapidly to a fully functional state after a failure.
Resilience may be measured in terms of the availability of a system at any given instance, as a continuous function of the frequency or delays in occurrence of the faults and the speed of recovery from a fault state. Therefore, important metrics for resilience include the system’s mean time to failure (MTTF) and the mean time to recovery (MTTR).
The choice of redundancy configuration maps directly to the MTTF and MTTR of a system. An optimal choice aims to achieve the highest MTTF and the lowest MTTR while reducing the cost and complexity of the redundant system. In other words, the redundancy configuration must be chosen such that the system remains available for a prolonged period on average (high MTTF), but when it does fail, the average time to recover to the optimal state (MTTR) is the lowest, without incurring unacceptable cost and complexity. Another useful metric is the mean time between failures (MTBF), which is the mathematical sum of MTTF and MTTR.
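A short worked example ties these metrics together; the MTTF and MTTR figures are assumed for illustration. Steady-state availability is commonly expressed as MTTF / (MTTF + MTTR), i.e., MTTF / MTBF.

    mttf = 990.0   # mean time to failure, in hours (assumed)
    mttr = 10.0    # mean time to recovery, in hours (assumed)

    mtbf = mttf + mttr             # MTBF = MTTF + MTTR
    availability = mttf / mtbf     # fraction of time the system is up
    print("MTBF =", mtbf, "h; availability =", f"{availability:.2%}")  # 99.00%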
The choice of resilience and redundancy measures is based on the dependability requirements of the system, with respect to the cost, complexity, and practical impediments involved. In terms of goals and objectives, resilience is often considered an important requirement, as it may be used to describe dependability attributes such as availability, reliability, and performance. Redundancy in itself, however, is never a goal: the requirements associated with dependability should be fulfilled with a redundancy configuration incurring the lowest possible cost and complexity.
Troubleshooting with half split and move method
1-Split-Half Troubleshooting:- This technique allows you to locate defects with only a few measurements. Essentially, you split the circuit path into halves and troubleshoot from the middle: if a circuit has many potential test points, you measure at the halfway point. You must know in advance what a good signal and a bad signal look like at each point. If the signal is good at the midpoint, the problem probably lies in the half beyond it; if the signal is bad, the problem lies in the half behind it, and you go back and test there. You repeat the process, halving the suspect section with each measurement. This is essentially the divide-and-conquer idea discussed in chapter one.
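Because each measurement halves the remaining search interval, half-splitting is exactly a binary search over the test points. The Python sketch below simulates it; probe() stands in for a real meter reading, and the fault position is an assumed example value.

    FAULT_AFTER = 6   # simulated: the signal is good up to and including point 6

    def probe(i):
        # Stand-in for a real measurement; True means "good signal at point i".
        return i <= FAULT_AFTER

    def half_split(num_points):
        lo, hi = 0, num_points - 1        # point 0 known good, last point known bad
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if probe(mid):
                lo = mid                  # good here: the fault is further along
            else:
                hi = mid                  # bad here: the fault is behind us
        return lo, hi                     # the fault lies between these two points

    print(half_split(16))                 # -> (6, 7), found in 4 measurements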
This part of the troubleshooting process can be the most difficult and the most time-consuming. That's why a logical and methodical plan is so important. We've found that the following order has been effective:
User errors. Check for user errors in the course of gathering information, duplicating the issue, and trying quick fixes. But keep in mind the possibility of incorrectly set switches or preferences, incompatible equipment, and incorrect assumptions on the user's part; take nothing for granted.
Software-related issues. Software that is unusable or that doesn't work with other software, viruses, extension conflicts, duplicate System folders, and other software issues can cause symptoms that may look like hardware issues. But replacing hardware won't solve these problems, and it wastes time and money, so always check for software issues before replacing any hardware. MacTest Pro system software tests can detect and repair many software issues of this type. Remember that you must check applications and the Mac OS.
Software viruses. As you most likely know, a virus is a program that replicates itself and often modifies other programs. When a virus gets into system software, the computer may not start up, the system may stop responding, or the software may work incorrectly. (It may be helpful to define this for a customer who really isn't sure why a virus could be such a big issue.) Although Macintosh computers are less likely to become infected with viruses than computers running other operating systems, it is still possible to get a virus on a Mac. Email attachments and other files downloaded from the Internet are common sources of virus infection.
To check for a virus, ask the customer about possible sources of infection. You can find up-to-date virus information on the Internet at a variety of locations. Third-party virus utilities such as McAfee Virex can check systems and remove viruses from them. If you do detect a virus, make sure you find the original source file and delete it. Then reinstall all affected system and application software, and dispose of any unusable data files.
Hardware issues. When you are convinced that user error, a virus, or other software has not caused the issue, hardware is what is left.
2-Move-the-Problem Troubleshooting Method:-
Move the problem is a very elementary troubleshooting technique that can be used for problem isolation: you physically swap components and observe whether the problem stays in place, moves with the component, or disappears entirely. Figure 1 shows two PCs and three laptops connected to a LAN switch, among which laptop B has connectivity problems. Assuming that hardware failure is suspected, you must discover whether the problem is in the switch, the cable, or the laptop. One approach is to start gathering data: checking the settings on the problem laptop, examining the settings on the switch, comparing the settings of all the laptops and the switch ports, and so on. However, you might not have the required administrative passwords for the PCs, laptops, and switch, in which case the only data you can gather is the status of the link LEDs on the switch, laptops, and PCs. What you can do is obviously limited. A common way to at least isolate the problem (if it is not solved outright) is cable or port swapping: swap the cable between a working device and laptop B (the one having problems), then move the laptop from one port to another using a cable that you know for sure is good. Based on these simple moves, you can isolate whether the problem is cable, switch, or laptop related.
Figure 1 Move the Problem: Laptop B Is Having Network Problems
Just by executing simple tests in a methodical way, the move-the-problem approach enables you to isolate the problem even when the information you can gather is minimal. Even if you do not solve the problem, you have scoped it to a single element and can focus further troubleshooting there. Note that in the previous example, if you determine that the problem is cable related, it is unnecessary to obtain the administrative passwords for the switch, PCs, and laptops. The drawback of this method is that you isolate the problem to only a limited set of physical elements and gain no real insight into what is happening, because you gather only very limited, indirect information. The method also assumes that the problem is with a single component; if the problem lies within multiple devices, you might not be able to isolate it correctly.
Using this approach alone is not likely to be enough to solve the problem, but if any of the other methods indicates a potential hardware issue between the consultant's PC and the access switch, this method might come into play. As a first step, you could swap the cable and the jack connected to the consultant's laptop and to the controller's PC, in turn, to see whether the problem is cable, PC, or switch related.
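The decision logic behind these swaps can be captured in a few lines; the Python sketch below simulates the laptop B scenario. Component states and the link_ok() check are illustrative assumptions, since in practice the "test" is simply whether the link LED comes up.

    def link_ok(laptop_ok, cable_ok, port_ok):
        # The link works only if every element in the path is healthy.
        return laptop_ok and cable_ok and port_ok

    # Unknown ground truth for the simulation: laptop B itself is faulty.
    laptop_b, cable_b, port_b = False, True, True   # laptop B's current setup
    laptop_a, cable_a, port_a = True, True, True    # a known-working setup

    # Test 1: swap cables between the working device and laptop B.
    if not link_ok(laptop_b, cable_a, port_b) and link_ok(laptop_a, cable_b, port_a):
        print("problem did not follow the cable -> cable ruled out")

    # Test 2: move laptop B to a known-good port with a known-good cable.
    if not link_ok(laptop_b, cable_a, port_a):
        print("problem followed laptop B -> the laptop is the likely fault")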