Question

In: Computer Science

(3) -> The arrival of packets at an Ethernet adapter of a web server is described by a Poisson process with a rate of 100 packets per second. Packets that arrive at the Ethernet adapter are queued up in a buffer until processed by the Interrupt Service Routine (ISR). Assume that the ISR service time per packet is exponentially distributed with a mean of 9.6 milliseconds. Answer the following questions:

(a) What is the capacity of the Ethernet adapter?
(b) Write down the probability distribution of the number of packets at the Ethernet adapter (either queued or being transmitted).
(c) How many packets do you expect to find waiting in the buffer of the adapter at any point in time (on average)?
(d) What is the average waiting time in the buffer (i.e., how long is a packet buffered until the ISR starts processing it)?
(e) What is the slowdown caused by queuing at the Ethernet adapter?

Solutions

Expert Solution

  • When the bandwidth demand of an individual logical partition is inconsistent with, or less than, the total bandwidth of a physical Ethernet adapter. If logical partitions use the full bandwidth of a physical Ethernet adapter, use dedicated Ethernet adapters.
  • When you want an Ethernet connection but there is no slot available in which to install a dedicated adapter.
  • M/G/1: same as M/M/1, but the packet transmission time distribution is general, with given mean 1/µ and variance σ².
  • Utilization factor ρ = λ/µ.
  • Pollaczek-Khinchine formula: average time in queue W = λ(σ² + 1/µ²) / (2(1 − ρ)); average delay T = 1/µ + λ(σ² + 1/µ²) / (2(1 − ρ)).
  • The formulas for the steady-state occupancy probabilities are more complicated.
  • Insight: as σ² increases, delay increases.
  • M/M/1 assumptions: the arrival process is Poisson with rate λ packets/sec; packet transmission times are exponentially distributed with mean 1/µ; there is one server; interarrival times and packet transmission times are independent. Transmission time is proportional to packet length.
    • Note: 1/µ is secs/packet, so µ is packets/sec.
    • Utilization factor: ρ = λ/µ (the system is stable if ρ < 1).
  • All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; this amounts to ensuring that its buffer never goes empty. A widely used rule of thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router line card needs roughly 250 ms × 10 Gb/s = 2.5 Gbits of buffering, and the amount of buffer grows linearly with the line rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs; and queuing delays can be long, have high variance, and may defeat the congestion control algorithms. One can argue that the rule of thumb (B = RTT × C) is now outdated and incorrect for backbone routers, because of the large number of flows multiplexed together on a single backbone link.
  • TCP does not stop operating when Ethernet flow control is enabled. However, a significant part of it does stop working correctly: its own flow control mechanism. That mechanism lets TCP use network links in a somewhat intelligent manner, because an overloaded network or device will cause some TCP segments to be lost and thus cause the sender to send data at a slower rate.
  • Now consider what happens when Ethernet flow control is mixed with TCP flow control. Suppose we have two directly linked computers, one of which is much slower than the other. The faster sending computer starts transferring lots of data to the slower receiving computer. The receiver eventually notices that it is being overloaded with data and sends a pause frame to the sender. The sender sees the pause frame and stops transmitting temporarily. Once the pause frame expires, the sender resumes sending its flood of data to the other computer. Unfortunately, the TCP engine on the sender cannot recognize that the receiver is full, because no data was lost: the receiver will generally stop the sender before it loses any data. Thus, the sender will continue to speed up at an exponential rate; because it did not see any lost data, it will send data twice as fast as before. Because the receiver has a permanent speed disadvantage, this forces the receiver to transmit pause frames twice as often. Things snowball until the receiver pauses the sender so often that the sender starts dropping its own data before it sends it, and thus finally sees some data being lost and slows down.
  • Is this a problem? In some ways it is not. Because TCP is a reliable protocol, nothing is ever really "lost"; it is merely retransmitted and life goes on. Ethernet flow control accomplishes the same thing as TCP flow control in this situation, as both slow down data transmission to the speed that the slower device can handle. There are some arguments to be made for there being an awkward overlap between the two flow control mechanisms, but it could be worse.
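Applying the M/M/1 results above to the original problem (λ = 100 packets/s, mean ISR service time 9.6 ms) yields the answers to (a)–(e); a minimal sketch in Python:

```python
# M/M/1 analysis of the Ethernet adapter (lambda = 100 pkt/s, E[S] = 9.6 ms).
lam = 100.0            # arrival rate (packets/sec)
mean_service = 0.0096  # ISR service time per packet (sec)

mu = 1.0 / mean_service     # (a) capacity: ~104.17 packets/sec
rho = lam / mu              # utilization = 0.96 (< 1, so the queue is stable)

def p_n(n):
    """(b) steady-state distribution: P(N = n) = (1 - rho) * rho**n."""
    return (1 - rho) * rho**n

lq = rho**2 / (1 - rho)     # (c) mean number waiting in the buffer = 23.04
wq = lq / lam               # (d) mean waiting time (Little's law) = 0.2304 s
slowdown = (wq + mean_service) / mean_service  # (e) = 1/(1 - rho) = 25

print(f"capacity mu = {mu:.2f} pkt/s, rho = {rho:.2f}")
print(f"Lq = {lq:.2f} packets, Wq = {wq*1000:.1f} ms, slowdown = {slowdown:.0f}x")
```

The slowdown in (e) falls out directly: total time in system T = 1/(µ − λ) versus service time 1/µ gives T/(1/µ) = 1/(1 − ρ) = 25.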

Related Solutions

Suppose a TCP client needs to send 3 packets to the TCP server. Before sending the...
Suppose a TCP client needs to send 3 packets to the TCP server. Before sending the first packet, the estimated RTT is 50 ms, and the estimated deviation of the sample RTT is 10 ms. The parameters are α = 0.1 and β = 0.2. The measured sample RTTs for the three packets are 60 ms, 70 ms, and 40 ms, respectively. Please compute the timeout value that was set for each packet right after it is transmitted.
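This follows the standard EWMA estimator, with TimeoutInterval = EstimatedRTT + 4·DevRTT. A sketch, assuming the RFC 6298 convention of updating the deviation with the old mean before updating the mean itself (the problem does not pin down that ordering):

```python
# EWMA RTT estimation: alpha weights the new sample in EstimatedRTT,
# beta weights the new deviation sample in DevRTT (per the problem: 0.1, 0.2).
alpha, beta = 0.1, 0.2
est_rtt, dev_rtt = 50.0, 10.0  # ms, initial estimates before packet 1

timeouts = []
samples = [60.0, 70.0, 40.0]   # measured SampleRTTs for packets 1..3
for sample in samples:
    # The timeout is set when the packet is transmitted, i.e. BEFORE
    # that packet's own sample is measured.
    timeouts.append(est_rtt + 4 * dev_rtt)
    # Update the estimators once the sample arrives (RFC 6298 order:
    # deviation first, using the old mean, then the mean).
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample - est_rtt)
    est_rtt = (1 - alpha) * est_rtt + alpha * sample

print(timeouts)  # timeouts (ms) set for packets 1, 2, 3
```

With this convention the three timeouts come out to 90 ms, 91 ms, and 100.1 ms; a different update order would change the last two slightly.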
Web Server is the computer that stores Web Server Software and Website. If you are running...
A web server is the computer that stores the web server software and the website's files. If you are running a service like Food Panda, which type of hosting server should be used? Answer the question by discussing and comparing the different types of web hosting. If you have a low budget, what would be the best possible hosting plan in this situation? Justify your answer with logical reasoning.
Discuss the main similarity and difference between a dedicated web server and a co-located web server....
Discuss the main similarity and difference between a dedicated web server and a co-located web server. Group of answer choices Both of them are mainly used for small to medium-size web sites. Both of them are mainly used for large to enterprise-size web sites. Both of them are kept and connected to the Internet at the web host provider's location. One of them is kept and connected to the Internet at the web host provider's location, while the other is...
Describe the process involving the transmission of a Web page from a Web server to a...
Describe the process involving the transmission of a Web page from a Web server to a user’s computer.
AWS screenshot of a view of the web browser connection to your web server via the...
AWS screenshot of a view of the web browser connection to your web server via the load balancer (step 5 of this lab document).
A queue with one server without buffer, the probability of a customer’ arrival and departure in...
A queue has one server and no buffer; the probabilities of a customer's arrival and departure in a time unit are p and q, respectively. Please: 1) give the one-step state transition probability matrix; 2) give the balance equations; 3) calculate the limiting probabilities for p = 0.3 and q = 0.5. (12 points)
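Under one common reading (an arriving customer is lost when the server is busy, and at most one event occurs per time unit — an assumption, since the problem does not specify how simultaneous events are handled), the system is a two-state Markov chain. A sketch solving the balance equations numerically:

```python
import numpy as np

p, q = 0.3, 0.5  # arrival and departure probabilities per time unit

# States: 0 = server idle, 1 = server busy (no buffer; blocked arrivals lost).
P = np.array([
    [1 - p, p],      # from state 0: arrival -> busy, otherwise stay idle
    [q,     1 - q],  # from state 1: departure -> idle, otherwise stay busy
])

# Balance equations: pi = pi P, with pi summing to 1.
# For this chain they reduce to pi0 * p = pi1 * q.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # limiting probabilities [pi0, pi1]
```

The closed form is π₀ = q/(p+q) = 0.625 and π₁ = p/(p+q) = 0.375 for the given values.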
How are the web frameworks - Spring, Google Web Toolkit, and JavaServer Faces - similar...
How are the web frameworks - Spring, Google Web Toolkit, and JavaServer Faces - similar and how are they different?
The goal of this lab is to write a simple, but functional, web server that is...
The goal of this lab is to write a simple, but functional, web server that is capable of sending files to a web browser on request. The web server must create a listening socket and accept connections. It must implement just enough of the HTTP/1.1 protocol to enable it to read requests for files and to send the requested files. It should also be able to send error responses to the client when appropriate. It would be useful to see...
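A minimal single-threaded sketch of such a server, using only Python's standard socket module (the served directory and request parsing are deliberately simplistic; a real lab solution would need more robust parsing and error handling):

```python
import os
import socket

def run_server(host="127.0.0.1", port=8080, max_requests=None):
    """Serve files from the current directory over a bare-bones HTTP/1.1."""
    served = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((host, port))
        listener.listen(5)
        while max_requests is None or served < max_requests:
            conn, _addr = listener.accept()
            with conn:
                request = conn.recv(4096).decode("latin-1")
                # The request line looks like: "GET /index.html HTTP/1.1"
                try:
                    method, path, _version = request.split("\r\n")[0].split()
                except ValueError:
                    continue  # malformed request line: drop the connection
                filename = path.lstrip("/") or "index.html"
                if method == "GET" and os.path.isfile(filename):
                    with open(filename, "rb") as f:
                        body = f.read()
                    header = (f"HTTP/1.1 200 OK\r\nContent-Length: {len(body)}"
                              "\r\nConnection: close\r\n\r\n")
                    conn.sendall(header.encode() + body)
                else:
                    body = b"404 Not Found"
                    conn.sendall(b"HTTP/1.1 404 Not Found\r\nContent-Length: "
                                 + str(len(body)).encode()
                                 + b"\r\nConnection: close\r\n\r\n" + body)
            served += 1
```

Pointing an ordinary web browser at `http://127.0.0.1:8080/somefile.html` while the server runs should return the file, or a 404 error response if it does not exist.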
A small company network has multiple servers (including a web server, a log server, DNS servers,...
A small company network has multiple servers (including a web server, a log server, DNS servers, a file server for inventory information and customer orders, but no email server), two firewalls, a DMZ, and PCs. The company sells products online. a) Suppose that you are a system administrator. What types of network connections will you allow to be established with the servers in the DMZ from the Internet? b) What are the points of entry for attackers? c) How do...
Computer Networking Proxy Server Related Question: Please explain the caching algorithms of a web proxy server...
Computer Networking Proxy Server Related Question: Please explain the caching algorithms of a web proxy server and include following concepts: Greedy Dual Size, least recently used, and least frequently used.
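As an illustration of one of those policies, a minimal least-recently-used (LRU) cache sketch (GreedyDual-Size and LFU differ mainly in the eviction choice: size-weighted cost versus access count, respectively):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self._store:
            return None  # cache miss
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("/a.html", "A")
cache.put("/b.html", "B")
cache.get("/a.html")         # touch /a.html so /b.html becomes LRU
cache.put("/c.html", "C")    # evicts /b.html
print(cache.get("/b.html"))  # None (evicted)
```

The URL keys here are placeholders; in a proxy the key would be the request URL and the value the cached response.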