a)
Two full implementations will allow the company to continue to maintain an Internet presence at all sites in the event that a WAN circuit at one site goes down.
A collection of networks that fall within the same
administrative domain is called an autonomous system (AS). In this
question, each datacenter will be an autonomous system.
The routers within an AS use an interior gateway protocol, such as
the Routing Information Protocol (RIP) or the Open Shortest Path
First (OSPF) protocol, to exchange routing information among
themselves. At the edges of an AS are routers that communicate with
the other AS’s on the Internet, using an exterior gateway protocol
such as the Border Gateway Protocol (BGP).
If a WAN link goes down, BGP will route data through another WAN
link if redundant WAN links are available.
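As a rough illustration of that failover behaviour, the sketch below models two WAN circuits advertising the same prefix with different preferences; when the preferred circuit goes down, the surviving route is selected. The names and preference values are invented, and this is a toy model of best-path selection, not a BGP implementation.

```python
# Toy illustration of WAN failover: two circuits carry the same prefix,
# and when the preferred link fails, traffic moves to the surviving one.
# Router names and preference values are invented for the example.

from dataclasses import dataclass

@dataclass
class WanRoute:
    prefix: str        # destination network
    next_hop: str      # edge router on one of the WAN circuits
    local_pref: int    # higher is preferred (mirrors BGP LOCAL_PREF)
    up: bool = True    # circuit state

def best_route(routes):
    """Return the most-preferred usable route, or None if all links are down."""
    usable = [r for r in routes if r.up]
    return max(usable, key=lambda r: r.local_pref) if usable else None

routes = [
    WanRoute("10.1.0.0/16", "isp-a.example", local_pref=200),  # primary circuit
    WanRoute("10.1.0.0/16", "isp-b.example", local_pref=100),  # backup circuit
]

print(best_route(routes).next_hop)   # isp-a.example (primary)
routes[0].up = False                 # primary WAN circuit goes down
print(best_route(routes).next_hop)   # isp-b.example (traffic fails over)
```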
Routing Information Protocol (RIP) is a protocol that routers can use to exchange network topology information. It is characterized as an interior gateway protocol, and is typically used in small to medium-sized networks. A router running RIP sends the contents of its routing table to each of its adjacent routers every 30 seconds. When a route stops being advertised (for example, because a link went down), the receiving routers flag it as unusable after 180 seconds and remove it from their tables after an additional 120 seconds.
There are two versions of the protocol, RIPv1 and RIPv2; RIPv2 adds support for subnet masks (VLSM/CIDR) and simple authentication, and most managed switches and routers support both.
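A minimal sketch of those timers, using logical time instead of real sleeps (the route and times are hypothetical; the constants follow the 180-second timeout and 120-second garbage-collection values above):

```python
# Minimal sketch of the RIP route timers: a route that stops receiving
# updates is flagged unusable after 180 s and deleted 120 s later.

TIMEOUT_S = 180   # flag route as unusable after this long without updates
GC_S = 120        # delete the route this long after it was flagged

class RipRoute:
    def __init__(self, prefix, learned_at):
        self.prefix = prefix
        self.last_update = learned_at
        self.unusable_at = None  # set when the timeout expires

    def tick(self, now):
        """Age the route; return 'valid', 'unusable', or 'delete'."""
        if self.unusable_at is None:
            if now - self.last_update >= TIMEOUT_S:
                self.unusable_at = now          # flag as unreachable
            else:
                return "valid"
        if now - self.unusable_at >= GC_S:
            return "delete"                     # remove from the table
        return "unusable"

route = RipRoute("192.168.5.0/24", learned_at=0)
for now in (30, 60, 180, 299, 300):             # neighbour stops sending at t=0
    print(now, route.tick(now))
# 30/60: valid; 180/299: unusable; 300: delete
```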
Open Shortest Path First (OSPF) is a link-state routing protocol that finds the best path between the source and destination routers using its Shortest Path First (SPF) algorithm. OSPF was developed by the Internet Engineering Task Force (IETF) as an Interior Gateway Protocol (IGP), i.e. a protocol for moving packets within a single large autonomous system or routing domain. It is a network-layer protocol that runs as IP protocol number 89 and has an administrative distance (AD) of 110. OSPF uses the multicast address 224.0.0.5 for normal communication and 224.0.0.6 for updates to the Designated Router (DR) and Backup Designated Router (BDR).
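The "shortest path first" computation OSPF performs is Dijkstra's algorithm run over the link-state database. A minimal sketch on an invented four-router topology (router names and link costs are assumptions, and a plain adjacency dict stands in for a real LSDB):

```python
# Dijkstra's algorithm, the computation behind OSPF's SPF run.

import heapq

def spf(graph, source):
    """Return the lowest total cost from source to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

topology = {                              # router -> {neighbour: OSPF cost}
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(spf(topology, "R1"))                # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
```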
b)
Fragmentation-Redundancy-Scattering (FRS) is a technique that can be used to achieve security and fault-tolerance [1]. It consists of fragmenting the confidential data and scattering the resulting fragments across several archive sites of the distributed system. Fragmentation is performed so that any isolated fragment contains no significant information. Scattering is performed in such a way that each archive contains just a subset of (unrelated) fragments and that each fragment is stored in more than one site. The technique provides intrusion-tolerance: if a node is compromised, the intruder has no access to relevant information (and if compromised fragments are deleted or altered, they can be recovered from other nodes).
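A toy sketch of the fragment-and-scatter placement logic follows. Note that real FRS fragments the data so that isolated pieces carry no significant information (e.g. via secret sharing); the plain byte-splitting here is only to show how replicas are spread across sites.

```python
# Toy fragmentation-redundancy-scattering: cut the secret into fragments,
# replicate each fragment, and scatter the replicas so that no single
# archive site holds the whole secret.

def fragment(data: bytes, n_fragments: int):
    """Cut data into roughly equal fragments."""
    size = -(-len(data) // n_fragments)          # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def scatter(fragments, n_sites, copies=2):
    """Place each fragment on `copies` distinct sites (round-robin)."""
    sites = {s: [] for s in range(n_sites)}
    for i, frag in enumerate(fragments):
        for c in range(copies):
            sites[(i + c) % n_sites].append((i, frag))
    return sites

secret = b"confidential payload"
placement = scatter(fragment(secret, 4), n_sites=4, copies=2)
for site, stored in placement.items():
    print(f"site {site}: fragments {[i for i, _ in stored]}")
# Losing (or compromising) any one site leaves every fragment recoverable
# from another site, and no site ever holds all four fragments.
```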
Redundancy:
The underlying concept of redundant networks is simple: without any backup systems in place, a single point of failure can disrupt or bring down an entire system. Network redundancy is the practice of adding extra network devices and lines of communication to help ensure network availability and decrease the risk of failure along the critical data path.
Generally speaking, data centers use two forms of redundancy to ensure systems stay up and running: alternative data paths and backup equipment. In the event of a failure, either form allows the network to remain in service. Network redundancy is introduced to improve reliability and ensure availability; the basic concept is that if one device fails, another can automatically take over.
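A minimal sketch of that take-over behaviour, assuming an active/standby pair and an invented three-missed-heartbeats threshold:

```python
# Active/standby failover: the standby device promotes itself when
# heartbeats from its active peer stop. Names and the 3-beat threshold
# are invented for the example.

MISSED_BEATS_LIMIT = 3

class StandbyDevice:
    def __init__(self, name):
        self.name = name
        self.active = False
        self.missed = 0

    def on_heartbeat(self):
        self.missed = 0                       # active peer is alive

    def on_heartbeat_timeout(self):
        self.missed += 1
        if self.missed >= MISSED_BEATS_LIMIT and not self.active:
            self.active = True                # take over the service
            print(f"{self.name}: peer failed, assuming active role")

standby = StandbyDevice("router-b")
standby.on_heartbeat()                        # normal operation
for _ in range(3):                            # peer goes silent
    standby.on_heartbeat_timeout()
# router-b: peer failed, assuming active role
```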
Patching:
A typical corporate network will include servers and user endpoints running a large number of different applications on top of different operating systems. These applications probably include software developed in-house, proprietary commercial software packages, and open-source applications. Most software will have been approved by the IT department, but most corporate networks also include an element of unauthorized, consumer-oriented software installed by end users.
The goal of patch management is to ensure that all applications running on the network are patched so that they remain secure and stable. Achieving this goal involves maintaining an inventory of the software in use, tracking vendor patch releases, testing patches in a lab, and deploying them across the network in a controlled way.
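As a sketch of the inventory/compliance step, the snippet below compares installed versions against the latest patched versions. The hosts, applications, and version numbers are invented; in practice both sides would come from an asset database and vendor feeds.

```python
# Patch compliance check: report every host/application pair that is
# behind the latest patched version.

installed = {                     # host -> {application: installed version}
    "web01": {"openssl": "3.0.11", "nginx": "1.24.0"},
    "db01":  {"openssl": "3.0.13", "postgres": "15.4"},
}
latest = {"openssl": "3.0.13", "nginx": "1.24.0", "postgres": "15.6"}

def parse(version):
    """Turn '3.0.11' into (3, 0, 11) so versions compare numerically."""
    return tuple(int(x) for x in version.split("."))

for host, apps in installed.items():
    for app, version in apps.items():
        if parse(version) < parse(latest[app]):
            print(f"{host}: {app} {version} needs patching to {latest[app]}")
# web01: openssl 3.0.11 needs patching to 3.0.13
# db01: postgres 15.4 needs patching to 15.6
```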
c)
Configure the test lab for updates:
To prevent service pack issues, validate any new service pack in a test/lab environment before applying it in your production environment.
Server applications that could be affected are:
1) DNS issues and network connectivity
An essential element of successful web traffic management is DNS, so an issue with these systems can result in a plethora of problems. Without proper protection, faulty DNS queries can prevent visitors from reaching your website while also causing errors, 404s, and incorrect pathways. In a similar vein, network connectivity and an efficient firewall are key to your site's accessibility and productivity.
The best way to tackle these issues is to implement DNS monitoring safeguards that identify what is causing them. You should also check your switches and VLAN tags, and distribute tasks between servers.
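A minimal DNS-monitoring sketch using only Python's standard library (the hostnames and expected address are placeholders; a real monitor would also alert, retry, and track latency over time):

```python
# Resolve each monitored name and flag failures or unexpected answers.

import socket

CHECKS = {
    "www.example.com": None,         # None = just require that it resolves
    "api.example.com": "192.0.2.10"  # or require a specific expected address
}

def check_dns(name, expected):
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(name, 80)}
    except socket.gaierror as exc:
        return f"FAIL {name}: does not resolve ({exc})"
    if expected and expected not in addrs:
        return f"WARN {name}: resolves to {addrs}, expected {expected}"
    return f"OK   {name}: {sorted(addrs)}"

for name, expected in CHECKS.items():
    print(check_dns(name, expected))
```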
2) Slow servers and loading time
If your servers are particularly slow, they could be hosted using a shared account, which means that your site is sharing the server with hundreds, possibly thousands of other websites. You can address this common roadblock by checking with your hosting company to determine whether or not the site is hosted on a dedicated server.
If you’re hoping to see just how slow your site is, go to Google and use its PageSpeed Insights tool. All you have to do is enter your domain name and click Analyze. The tool looks at the contents of the site and identifies the elements that are making it run slower. The tool churns out suggestions that will help your website run faster.
3) Poorly written code
Another web application performance problem that many teams face is poorly written code, which can mean inefficient code, memory leaks, or synchronization issues. Ineffectual algorithms can also cause the application to deadlock and its performance to degrade. Old versions of software or integrated legacy systems can likewise take a toll on your website's performance.
You can tackle this issue by ensuring that your developers use sound coding practices, supported by code reviews and automated tools such as profilers.
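As an example of what a profiler surfaces, the sketch below times a deliberately quadratic duplicate check with Python's built-in cProfile. The two functions are invented stand-ins for a real hot spot and its rewrite.

```python
# cProfile shows where an inefficient implementation spends its time.

import cProfile

def has_duplicates_slow(items):
    # O(n^2): compares every pair -- the kind of code a profiler flags
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicates_fast(items):
    # O(n): a set-based rewrite after the profile pointed at the hot loop
    return len(set(items)) != len(items)

data = list(range(3000))
cProfile.run("has_duplicates_slow(data)")  # millions of comparisons
cProfile.run("has_duplicates_fast(data)")  # a handful of calls
```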
4) Lack of load balancing
Slow response times can also be caused by poor load distribution. When new site visitors are assigned to servers incorrectly, some servers can be overwhelmed even while the system as a whole is under capacity. This causes slow response times, especially when the site is receiving many requests.
Tools such as NeoLoad and AppPerfect help you find infrastructural weaknesses you may be experiencing, while also testing problem areas. You should also run a cluster of servers instead of having a single server take all the load, as in the sketch below. Service-oriented architecture (SOA) can also help with scalability as more servers are added: in this design, application components provide services to the site's other components through a communication protocol.
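A sketch of the simplest balancing policy, round-robin dispatch across a cluster (backend names are placeholders; production balancers also weight by capacity and skip unhealthy backends):

```python
# Round-robin load balancing: spread requests across a cluster instead of
# sending them all to one server.

import itertools

class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the backend that should handle the next request."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["app1.internal", "app2.internal", "app3.internal"])
for request_id in range(6):
    print(f"request {request_id} -> {balancer.pick()}")
# Requests alternate app1, app2, app3, app1, ... so no single server
# absorbs the whole load.
```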
5) Traffic spikes
Spikes happen, especially during a marketing promotion with videos, and a company may not be prepared for the extra traffic. This issue can also cause your servers to slow down, hindering the performance of your site and harming your brand.
One solution is to set up an early warning system using simulated-user monitoring tools such as NeoSense. Doing so helps you see when traffic is impacting transactions before users are negatively affected by the experience.
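A generic early-warning sketch in that spirit follows. This is not NeoSense's actual API; the URL and the two-second threshold are placeholders. The idea is simply to time a scripted transaction and warn when it crosses a threshold.

```python
# Synthetic-user monitoring in miniature: time one scripted transaction
# and warn when it is slower than the threshold.

import time
import urllib.request

URL = "https://www.example.com/"
THRESHOLD_S = 2.0

def timed_transaction(url):
    """Run one simulated user transaction and return its duration."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

elapsed = timed_transaction(URL)
status = "WARN: slow" if elapsed > THRESHOLD_S else "ok"
print(f"{URL} took {elapsed:.2f}s ({status})")
```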
6) Specific HTML title tags
Even the name of your website can affect its performance, as HTML title tags are essential to its success. These tags summarize the content of your website or web page for major search engines such as Google, and a lack of specificity lowers visibility. Site owners sometimes use the same title throughout their website; search engines then detect the duplicate title tags and pare those pages from results, causing the site to lose traffic.
You can tackle this issue by doing a search such as "site:yourdomain.com". Then go to Google Search Console (formerly known as Google Webmaster Tools) to analyze your website. The tool reports HTML errors such as missing title tags, duplicate meta descriptions, missing descriptions, and more.
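A small sketch of the duplicate-title check itself (the page contents are inlined placeholders; a real checker would crawl each URL):

```python
# Extract the <title> from a set of pages and report any title used more
# than once.

import re
from collections import defaultdict

pages = {   # URL -> HTML (placeholder content)
    "/":        "<html><head><title>Acme Widgets</title></head></html>",
    "/about":   "<html><head><title>Acme Widgets</title></head></html>",
    "/pricing": "<html><head><title>Acme Widgets - Pricing</title></head></html>",
}

titles = defaultdict(list)
for url, html in pages.items():
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    titles[match.group(1).strip() if match else "(missing)"].append(url)

for title, urls in titles.items():
    if len(urls) > 1:
        print(f"duplicate title {title!r} on: {', '.join(urls)}")
# duplicate title 'Acme Widgets' on: /, /about
```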
7) Failing to optimize bandwidth usage
When developing and testing a site, businesses often rely on a local network environment. This may not seem like an issue at first because adding visual, audio, video or other high-volume data may not affect your local network. However, consumers accessing the website at home through their smartphones may face a series of issues you weren’t anticipating.
Make sure you optimize your bandwidth usage for a performance boost. Useful measures include minification of JavaScript and CSS, server-side HTTP compression, and optimization of image size and resolution.
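As a final illustration, here is server-side HTTP compression in miniature: gzip the response body and compare sizes. Most servers enable this with a configuration flag (e.g. gzip in nginx); the page content below is invented.

```python
# Compress a response body and show the bandwidth saving.

import gzip

body = (b"<html><body>" + b"<p>repetitive page content</p>" * 200
        + b"</body></html>")
compressed = gzip.compress(body)

print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(body):.0f}% of original)")
# Text compresses well, so the same page costs far less bandwidth when the
# server sends it with Content-Encoding: gzip.
```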