Question

In: Computer Science

11. Write an application case study with network design diagram including the following topics: [10 Marks] - IoT - Bluetooth - Fog Computing - Redundant - Resilient and measures of resilience - Troubleshooting with half split and move method. Highlight in bold each of the above keywords.

Solutions

Expert Solution

Internet of Things (IoT)

The Internet of Things (IoT) is about interconnecting embedded systems, bringing together two evolving technologies: wireless connectivity and sensors. These connected embedded systems are independent microcontroller-based computers that use sensors to collect data. The IoT refers to a system of interrelated, internet-connected objects that are able to collect and transfer data over a wireless network without human intervention. These IoT systems are usually networked together by a wireless protocol such as Wi-Fi, Bluetooth, IEEE 802.15.4, or a custom communication system. The networking protocol is selected based on the distribution of nodes and the amount of data to be collected.

A thing in the internet of things can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low or any other natural or man-made object that can be assigned an Internet Protocol (IP) address and is able to transfer data over a network.

IoT Sensor Node Block Diagram

The Internet of Things is a combination of the following:

1. Sensors and Actuators:- Sensors measure physical quantities such as temperature, sound, moisture, and vibration, and convert each physical quantity into an electrical signal. These converted signals pass to the system, and the system then acts accordingly.

2. Connectivity:- The received signals are uploaded to the network using different communication media such as Wi-Fi, Bluetooth or BLE, LoRaWAN, LTE, and many more.

3. People and Process:- People and processes are an important part of IoT. Networked inputs are combined into a bidirectional system that integrates data, people, and processes for better decision making. Users can access the results through a mobile or web application.

4. IoT Gateways & frameworks:- As the name suggests, a gateway is the door to the internet for all the things/devices we want to interact with. Gateways act as a carrier between the internal network of sensor nodes and the external Internet or World Wide Web.

5. Cloud server:- The data transmitted through the gateway is stored and processed securely within the cloud server, i.e. in data centres. This processed data is then used to perform intelligent actions that turn all our devices into smart devices. In the cloud, all analytics and decision making happen with user convenience in mind.
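As a sketch of how these pieces fit together, the following Python toy models a sensor feeding a gateway that batches readings up to a cloud server for analytics. The class names and the averaging analytic are illustrative assumptions, not part of any particular IoT framework:

```python
import random

class Sensor:
    """Hypothetical temperature sensor: converts a physical quantity to a reading."""
    def read(self) -> float:
        return 20.0 + random.uniform(-2.0, 2.0)  # degrees Celsius

class Gateway:
    """Bridges the local sensor network to the wider Internet (batches readings)."""
    def __init__(self):
        self.buffer = []
    def collect(self, reading: float):
        self.buffer.append(reading)
    def flush(self):
        batch, self.buffer = self.buffer, []
        return batch

class CloudServer:
    """Stores batches in the 'data centre' and runs a simple analytic (average)."""
    def __init__(self):
        self.store = []
    def ingest(self, batch):
        self.store.extend(batch)
    def average(self) -> float:
        return sum(self.store) / len(self.store)

sensor, gateway, cloud = Sensor(), Gateway(), CloudServer()
for _ in range(5):
    gateway.collect(sensor.read())   # sensing and connectivity
cloud.ingest(gateway.flush())        # gateway forwards to the cloud
print(f"average temperature: {cloud.average():.1f} C")
```

A real deployment would replace the in-memory calls with a wireless link and an MQTT/HTTP upload, but the data path (sensor, gateway, cloud, analytics) is the same.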

Bluetooth

Bluetooth is a high-speed, low-power wireless technology designed to connect phones and other portable equipment. It is a specification (IEEE 802.15.1) for the use of low-power radio communications to link phones, computers, and other network devices over short distances without wires. Wireless signals transmitted with Bluetooth cover short distances, typically up to 30 feet (10 meters).

This is achieved by embedding low-cost transceivers into the devices. Bluetooth operates in the 2.45 GHz frequency band and can support up to 721 kbps along with three voice channels. This frequency band has been set aside by international agreement for the use of industrial, scientific, and medical (ISM) devices. Bluetooth can connect up to eight devices simultaneously, and each device carries a unique 48-bit address from the IEEE 802 standard, with connections made point to point or multipoint.

A Bluetooth network consists of a personal area network, or piconet, which contains a minimum of 2 and a maximum of 8 Bluetooth peer devices: usually a single master and up to 7 slaves. A master is the device that initiates communication with other devices. The master device governs the communications link and traffic between itself and the slave devices associated with it. A slave device is a device that responds to the master device. Slave devices are required to synchronize their transmit/receive timing with that of the master. In addition, transmissions by slave devices are governed by the master device (i.e., the master device dictates when a slave device may transmit). Specifically, a slave may only begin its transmissions in a time slot immediately following the time slot in which it was addressed by the master, or in a time slot explicitly reserved for use by the slave device.

  

The frequency hopping sequence is defined by the Bluetooth device address (BD_ADDR) of the master device. The master device first sends a radio signal asking for response from the particular slave devices within the range of addresses. The slaves respond and synchronize their hop frequency as well as clock with that of the master device.
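The hop-synchronization idea can be illustrated with a small sketch: any device that knows the master's address (and clock) derives the same channel sequence. Note that the real Bluetooth hop-selection kernel is a specific bit-level function defined in the specification; seeding a general-purpose PRNG with the BD_ADDR, as below, is only an analogy:

```python
import random

def hop_sequence(bd_addr: int, clock_ticks: int, channels: int = 79):
    """Illustrative hop sequence: seed a PRNG with the master's BD_ADDR so every
    synchronized device derives the same channel order. (The real Bluetooth hop
    selection kernel is a defined bit-level function, not a general PRNG.)"""
    rng = random.Random(bd_addr)
    return [rng.randrange(channels) for _ in range(clock_ticks)]

master_addr = 0x001A7DDA7113          # example 48-bit device address
seq_master = hop_sequence(master_addr, 10)
seq_slave = hop_sequence(master_addr, 10)  # slave synchronized to master's address
assert seq_master == seq_slave        # both devices land on the same channels
print(seq_master)
```

The point of the sketch is only that the sequence is a deterministic function of the master's identity, which is why a slave that has synchronized its clock and learned the master's address can follow the hops.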

The core specification consists of 5 layers:

  • Radio: Radio specifies the requirements for radio transmission – including frequency, modulation, and power characteristics – for a Bluetooth transceiver.
  • Baseband Layer: It defines physical and logical channels and link types (voice or data); specifies various packet formats, transmit and receive timing, channel control, and the mechanism for frequency hopping (hop selection) and device addressing. It specifies point-to-point or point-to-multipoint links. The length of a packet can range from 68 bits (shortened access code) to a maximum of 3071 bits.
  • LMP- Link Manager Protocol (LMP): defines the procedures for link set up and ongoing link management.
  • Logical Link Control and Adaptation Protocol (L2CAP): is responsible for adapting upper-layer protocols to the baseband layer.
  • Service Discovery Protocol (SDP): – allows a Bluetooth device to query other Bluetooth devices for device information, services provided, and the characteristics of those services.

The first three layers comprise the Bluetooth module, whereas the last two layers make up the host. The interface between these two logical groups is called the Host Controller Interface (HCI).

Fog Computing

Fog computing is an extension of cloud computing that deploys data storage, computing and communications resources, control and management, and data analytics closer to the endpoints. It is especially important for the Internet of Things (IoT) continuum, where low latency and low cost are needed.

Fog computing architecture is the arrangement of physical and logical network elements, hardware, and software to implement a useful IoT network. Key architectural decisions involve the physical and geographical positioning of fog nodes; their arrangement in a hierarchy; the numbers, types, topology, protocols, and data bandwidth capacities of the links between fog nodes, things, and the cloud; the hardware and software design of individual fog nodes; and how a complete IoT network is orchestrated and managed. In order to optimize the architecture of a fog network, one must first understand the critical requirements of the general use cases that will take advantage of fog and the specific software application(s) that will run on them. These requirements must then be mapped onto a partitioned network of appropriately designed fog nodes. Certain clusters of requirements are difficult to implement on networks built with heavy reliance on the cloud (intelligence at the top) or on intelligent things (intelligence at the bottom), and are particularly influential in the decision to move to fog-based architectures.

From a systematic perspective, fog networks provide a distributed computing system with a hierarchical topology. Fog networks aim at meeting stringent latency requirements, reducing power consumption of end devices, providing real-time data processing and control with localized computing resources, and decreasing the burden of backhaul traffic to centralized data centers. And of course, excellent network security, reliability and availability must be inherent in fog networks.

6 Layers of Fog Computing Architecture

Physical and Virtualization Layer:- There are two types of nodes in this layer: physical nodes and virtual nodes. Nodes are initially responsible for the collection of data at multiple locations. Nodes are equipped with sensor technology, which helps them collect data from their surroundings. The collected data is then sent for processing with the help of gateways.

Monitoring Layer:- This layer monitors the various nodes. It tracks how long a node has been working, its temperature, and the battery life of the device.

Pre-processing Layer:- This layer handles analysis. Its main job is to extract meaningful data from the information collected from multiple sources. Data collected by nodes is thoroughly checked for errors and impurities. This layer makes sure that the data is thoroughly checked before being used for other purposes.

Temporary Storage Layer:- Mainly used for storing the data before it is forwarded to the cloud. Data is stored with the help of virtual storage.

Security Layer:- As the name suggests, this layer is entirely for the security of the data. It takes care of the privacy of the data, including encryption and decryption.

Transport Layer:- The primary function of this layer is to upload data to the cloud for permanent storage. The data is not uploaded completely; rather, only a portion of it is uploaded for efficiency. To keep this process efficient, lightweight communication protocols are used.
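A minimal sketch of data flowing through these layers might look like the following. The error bounds in pre-processing, the base64 stand-in for the security layer, and the 50% upload fraction in the transport layer are all illustrative assumptions:

```python
import base64

def preprocess(readings):
    """Pre-processing layer: drop obviously bad readings (error checking)."""
    return [r for r in readings if r is not None and -50 <= r <= 150]

def encrypt(payload: bytes) -> bytes:
    """Security layer stand-in: a real fog node would use proper encryption;
    base64 here only marks where that step sits in the pipeline."""
    return base64.b64encode(payload)

def transport(clean, fraction=0.5):
    """Transport layer: upload only a portion of the data to the cloud."""
    keep = clean[: max(1, int(len(clean) * fraction))]
    return encrypt(",".join(f"{r:.1f}" for r in keep).encode())

raw = [21.3, None, 999.0, 20.8, 22.1, 19.9]  # node readings with errors mixed in
clean = preprocess(raw)                       # temporary storage would hold this
print(transport(clean))
```

Each function stands in for one layer; in a real fog node these stages would be separate services, but the ordering (collect, check, store, secure, upload a portion) follows the layer description above.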

Essential Characteristics of Fog computing

Low Latency:- Because fog nodes connect to efficient, smart end devices, the analysis and generation of data by these devices are quicker. This results in lower data latency.

Heterogeneity:- Fog computing is quite a heterogeneous infrastructure, as it can collect data from multiple sources. Being a virtualized platform providing end-user storage and other services such as networking, it acts as a bridge between end devices and traditional cloud computing centers.

Mobility:- Many fog computing applications have to communicate with mobile devices. This makes them conducive to mobility techniques such as LISP (Locator/ID Separation Protocol), whose main task is to decouple location from identity.

Scalability and Agility of Fog-Node Clusters:- Being adaptive at the cluster level, fog computing is able to support functions such as elastic compute, data-load changes, and network variations.

Dominance of Wireless Access:- Although fog computing is widely used in wired environments, the wireless sensors spread over vast areas in IoT deployments have distinct analytics requirements. Fog computing is therefore also well suited to wireless IoT access networks.

Redundant - Resilient and measures of resilience

Resilience and redundancy offer ways to yield a dependable system, a property known as system dependability. Both resilience and redundancy are critical for the design and deployment of computer networks, data centers, and IT infrastructure. Despite the critical nature of both, resiliency and redundancy are not the same thing.

Here, we’ll explain system dependability, the differences in resiliency and redundancy, and how they both contribute to your system’s dependability.

What is system dependability?

System dependability is defined as “the trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers”. A computing system and its components must be strong enough to consistently deliver its service to end users. A key attribute of dependability is the reliability of the system, so that the rendered service is available throughout its operating duration at an acceptable performance level. Both resilience and redundancy help achieve a system’s dependability, but they are not interchangeable strategies.

Resiliency and redundancy

In the domain of computer networking, resilience and redundancy establish fault tolerance within a system, allowing it to remain functional despite the occurrence of issues such as power outage, cyber-attacks, system overload, and other causes of downtime. In this context, the terms can be defined as follows:

  • Redundancy is the intentional duplication of system components in order to increase a system’s dependability.
  • Resilience refers to a system’s ability to recover from a fault and maintain persistence of service in the face of faults.

With these definitions, redundancy and resilience are not interchangeable but complementary: Redundancy is required to enhance a system’s resilience, and resilience of individual system components ensures that redundant components can recover to functional state following a fault occurrence. Redundancy is an absolute measure of the additional components supporting system resilience, whereas resiliency is a relative (and continuous) measure of the impact of fault on the system operation.

How to build redundancy

Functional capabilities of computer system components encompass hardware, software, information, and time-based performance. Therefore, introducing redundancy into a system with respect to these functional categories can help break the fault-error-failure chain. When a fault occurs, it produces a performance error, resulting in the system’s failure to deliver a dependable service. Introducing redundancy in the form of additional hardware components, multiple software features, error checks for data, and consistency in information or system performance with respect to time reduces the possibility of a fault leading to system failure.

Redundancy can take various forms and configurations, including:

  • Active redundancy. Components are actively participating in system operation and sharing workload distribution. For example, you may transmit data over two fiber optic cables in parallel instead of just one. If one cable breaks, the system remains functional. Because the redundant component is already activated and involved in system functionality, the system is strong enough to recover from a state of fault. However, actively redundant components may also share a single point of failure, in which case the additional system elements may not contribute to system performance. For instance, connections loosening due to heat or other external damages can cause both fiber optics cables to disconnect simultaneously.
  • Passive redundancy. Application of redundant components only following the fault state. The additional components are present but not actively involved in system functionality. Recovery may take time since the activation process and connection to the system of the redundant components is not instantaneous in real-world applications. For instance, with a cold backup system additional hard drives are available to store data, but these additional drives connect to perform the necessary storage operations only when the primary storage volume fails.
  • Homogeneous redundancy. Application of redundant components of the same type, such as using the same brand and model of the hardware in a redundant configuration. The additions may yield exponential improvement in system resilience. However, this also makes the system susceptible to a single cause of failure. If the hardware is vulnerable to a certain security exploit, then the redundant hardware will not offer tolerance to that exploit.
  • Diverse redundancy. Application of redundant components different to the primary component. This configuration reduces the possibility of a single cause of failure impacting system dependability. For instance, trains use a mix of electric, pneumatic, hydraulic, and air braking systems. If a primary braking system fails due to excessive wear and tear or malfunction, secondary and tertiary braking systems take over immediately to reduce the risk of collision. Generally, diverse redundancy is quite complex and expensive to maintain.

A variety of redundancy configurations can be used for the design and deployment of IT infrastructure with a range of availability requirements, cost, and complexity considerations.

How to build resiliency

Redundancy is a measure to enhance resilience, allowing for a proactive or reactive response to impactful changes facing the system. Resilience requirements can be diverse and evolving, especially in scalable environments where a fault at one system element can cascade across the system. Other measures of resilience can include the system’s ability to:

  • Forecast potential faults
  • Isolate impacted components
  • Protect against potential faults
  • Remove faults and recover from a fault state
  • Restore optimal system performance

Resilience may be measured in terms of the availability of a system at any given instance, as a continuous function of the frequency or delays in occurrence of the faults and the speed of recovery from a fault state. Therefore, important metrics for resilience include the system’s mean time to failure (MTTF) and the mean time to recovery (MTTR).

The choice for configuration of redundancy maps directly to the MTTF and MTTR of a system. An optimal choice for redundancy configuration will aim to achieve the highest MTTF and lowest MTTR while reducing the cost and complexity of the redundant system. In other words, redundancy configuration must be chosen such that the system remains available for the prolonged time period on average (MTTF), but when it does fail, the average recovery time to optimal state (MTTR) should be the lowest, without incurring unacceptable cost and complexity of the system. Other useful metrics include the mean time between failure (MTBF), which is the mathematical sum of MTTF and MTTR.
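These metrics combine into the standard steady-state availability formula, availability = MTTF / (MTTF + MTTR) = MTTF / MTBF. A quick worked example with hypothetical numbers:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is up.
    MTBF = MTTF + MTTR, so availability = MTTF / MTBF."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical system: fails on average every 999 hours, takes 1 hour to recover.
a = availability(999.0, 1.0)
print(f"availability = {a:.3f} ({a * 100:.1f}%)")  # 0.999, i.e. "three nines"
```

Note that the same availability can be reached by raising MTTF or by lowering MTTR, which is exactly the trade-off the redundancy configuration is chosen to optimize.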

The choice for resilience and redundancy is based on the requirements on dependability of the system, with respect to the cost, complexity, and practical impediments involved. In terms of goals and objectives, resilience is often considered as an important requirement as it may be used to describe dependability attributes such as availability, reliability, and performance. However, redundancy in itself is never considered as a goal, since the requirements associated with dependability should be fulfilled with a redundancy configuration incurring the lowest possible cost and complexity.

Troubleshooting with half split and move method

1. Split-Half Troubleshooting:- This technique lets you locate a fault with only a few measurements. Essentially, it involves splitting the circuit path in half and measuring at the midpoint; you must know what a good signal and a bad signal look like at each test point. If the signal at the midpoint is good, the fault lies in the second half of the path; if it is bad, the fault lies in the first half. You then repeat the measurement at the midpoint of the suspect half, and so on, until the fault is isolated. This is essentially the divide-and-conquer idea discussed in Chapter 1.
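The split-half procedure is just a binary search over the chain of test points. A sketch, assuming a single fault so that every point before it measures good and every point after it measures bad:

```python
def split_half(test_points, signal_ok):
    """Binary search over an ordered chain of test points. signal_ok(i) is True
    when the measurement at point i reads good. Returns the first bad point,
    i.e. the segment where the fault lies. Assumes a single fault."""
    lo, hi = 0, len(test_points) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if signal_ok(mid):
            lo = mid + 1   # good here: the fault is further along the chain
        else:
            hi = mid       # bad here: the fault is at or before this point
    return test_points[lo]

points = ["TP1", "TP2", "TP3", "TP4", "TP5", "TP6", "TP7", "TP8"]
fault_at = 5  # signal goes bad from TP6 onward (unknown to the technician)
print(split_half(points, lambda i: i < fault_at))  # isolates "TP6" in 3 measurements
```

With 8 test points, split-half needs at most 3 measurements instead of up to 8 sequential ones, which is the whole appeal of the method.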

This part of the troubleshooting process can be the most difficult and the most time-consuming. That's why a logical and methodical plan is so important. We've found that the following order has been effective:

  1. User errors. Check for user errors in the course of gathering information, duplicating the issue, and trying quick fixes. But keep in mind the possibility of incorrectly set switches or preferences, incompatible equipment, and incorrect assumptions on the user's part; take nothing for granted.

  2. Software-related issues. Software that is unusable or that doesn't work with other software, viruses, extension conflicts, duplicate System folders, and other software issues can cause symptoms that may look like hardware issues. But replacing hardware won't solve these problems, and it wastes time and money, so always check for software issues before replacing any hardware. MacTest Pro system software tests can detect and repair many software issues of this type. Remember that you must check applications and the Mac OS.

  3. Software viruses. As you most likely know, a virus is a program that replicates itself and often modifies other programs. When a virus gets into system software, the computer may not start up, the system may stop responding, or the software may work incorrectly. (It may be helpful to define this for a customer who really isn't sure why a virus could be such a big issue.) Although Macintosh computers are less likely to become infected with viruses than computers running other operating systems, it is still possible to get a virus on a Mac. Email attachments and other files downloaded from the Internet are common sources of virus infection.

    To check for a virus, ask customers these questions:
    • Did you recently receive software from another user or a common source and add the software to your system?
    • Did you experience the issue before you obtained the software?
    • Did you share this file with others? Are they having similar issues?

    You can find up-to-date virus information on the Internet at a variety of locations. Third-party virus utilities such as McAfee Virex can check systems and remove viruses from them. If you do detect a virus, make sure you find the original source file and delete it. Then reinstall all affected system and application software, and dispose of any unusable data files.

  4. Hardware issues. When you are convinced that user error, a virus, or other software has not caused the issue, hardware is what is left. Here are some tips:

    • Simplify the issue. Remove external devices and internal cards (except the video card, if needed for display) and test the main unit alone. If the issue remains, you have isolated it to the computer itself. If the issue disappears, reinstall the cards and peripherals one by one, until the symptoms reappear. When they do, you have found the culprit—or at least a clue.
    • If the system can be tested with AHT, do so. This can often save you considerable time when checking for hardware issues.
    • Find the “problem space.” Try to identify the functional area—sometimes called a problem space—that the issue affects. For instance, the general functional areas for a typical Mac could be considered software, logic and control, memory, video, input/output (I/O), and power.

2. Move-the-Problem Troubleshooting:-

Move the problem is a very elementary troubleshooting technique that can be used for problem isolation: you physically swap components and observe whether the problem stays in place, moves with the component, or disappears entirely. Figure 1 shows two PCs and three laptops connected to a LAN switch, among which laptop B has connectivity problems. Assuming that a hardware failure is suspected, you must discover whether the problem is in the switch, the cable, or the laptop. One approach is to start gathering data by checking the settings on the problem laptop, examining the settings on the switch, comparing the settings of all the laptops and switch ports, and so on. However, you might not have the required administrative passwords for the PCs, laptops, and switch. The only data you can gather is the status of the link LEDs on the switch, laptops, and PCs, so what you can do is obviously limited. A common way to at least isolate the problem (if it is not solved outright) is cable or port swapping: swap the cable between a working device and laptop B (the one having problems), or move the laptop from one port to another using a cable that you know is good. Based on these simple moves, you can isolate whether the problem is cable, switch, or laptop related.

Figure 1 Move the Problem: Laptop B Is Having Network Problems

Just by executing simple tests in a methodical way, the move-the-problem approach enables you to isolate the problem even when the information you can gather is minimal. Even if you do not solve the problem, you have scoped it to a single element and can now focus further troubleshooting on that element. Note that in the previous example, if you determine that the problem is cable related, it is unnecessary to obtain the administrative passwords for the switch, PCs, and laptops. The drawback of this method is that you isolate the problem to only a limited set of physical elements without gaining any real insight into what is happening, because you gather only very limited, indirect information. The method also assumes that the problem lies with a single component; if it lies within multiple devices, you might not be able to isolate it correctly.

Using this approach alone is not likely to be enough to solve the problem, but if following any of the other methods indicates a potential hardware issue between the consultant's PC and the access switch, this method might come into play. However, merely as a first step, you could consider swapping the cable and the jack connected to the consultant's laptop and the controller's PC, in turn, to see whether the problem is cable, PC, or switch related.
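The swap sequence for laptop B can be sketched as a simple decision procedure. The `link_up` probe, its signature, and the single-fault assumption are all illustrative:

```python
def diagnose(link_up):
    """Move-the-problem isolation for laptop B (assumes a single faulty
    component). link_up(cable, port, laptop) reports whether the link LED
    lights with that combination. Swap in known-good parts one at a time and
    watch where the problem moves."""
    if link_up("known_good_cable", "original_port", "laptop_B"):
        return "cable"        # the problem disappeared with a new cable
    if link_up("original_cable", "known_good_port", "laptop_B"):
        return "switch port"  # the problem moved away with the port swap
    return "laptop"           # the problem stays with the laptop itself

# Hypothetical fault for the demo: the original cable is broken.
def link_up(cable, port, laptop):
    return cable != "original_cable"

print(diagnose(link_up))
```

Two swaps are enough to assign blame to one of three elements, and note that no administrative password is ever needed: the only observation is the link LED.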

