In: Computer Science
Consider sending a large file from one host to another over a TCP connection that has no loss. Suppose TCP uses AIMD for its congestion control without slow start. The initial cwnd is 4 MSS, and cwnd increases by 1 MSS every time a batch of ACKs is received. Assume approximately constant round-trip times. a. How long does it take for cwnd to increase from 5 MSS to 9 MSS (assuming no loss events)? b. What is the average throughput (in terms of MSS and RTT) for this connection up through time = 6 RTT?
Answer: Given data
=>TCP uses AIMD for congestion control without slow start.
=>Initial window size = 4 MSS
=>Congestion window (cwnd) increases by 1 MSS every RTT (one batch of ACKs arrives per RTT).
(a)
=>cwnd increases from 5 MSS to 9 MSS
Explanation:
=>cwnd increases by 1 MSS every RTT
5 MSS | 6 MSS | 7 MSS | 8 MSS | 9 MSS
=>Four increments are needed to go from 5 MSS to 9 MSS, so the time required = 4 RTT
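The count above can be sketched as a short simulation (a minimal illustration, assuming one additive increase of 1 MSS per RTT with no losses; the function name is ours):

```python
def rtts_to_grow(start_mss: int, target_mss: int) -> int:
    """RTTs needed for cwnd to grow from start_mss to target_mss
    under AIMD additive increase (+1 MSS per RTT, no loss events)."""
    rtts = 0
    cwnd = start_mss
    while cwnd < target_mss:
        cwnd += 1  # additive increase: +1 MSS each RTT
        rtts += 1
    return rtts

print(rtts_to_grow(5, 9))  # 4 RTT
```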
(b)
=>Time = 6 RTT
Explanation:
=>cwnd increases by 1 MSS every RTT
=>Initial window size = 4 MSS
4 MSS | 5 MSS | 6 MSS | 7 MSS | 8 MSS | 9 MSS | 10 MSS
=>After 6 RTTs the congestion window (cwnd) has grown to 10 MSS, but that 10 MSS window is only transmitted during the 7th RTT
Calculating average throughput:
=>Average throughput = total data sent / total time
=>Data sent during the first 6 RTTs = 4 + 5 + 6 + 7 + 8 + 9 = 39 MSS
=>Average throughput = 39 MSS / 6 RTT = 6.5 MSS/RTT
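The part (b) computation can be checked with a short sketch (an illustration only, assuming the window sent during RTT k is the cwnd at the start of that RTT; the function name is ours):

```python
def average_throughput(initial_mss: int, rtts: int) -> float:
    """Average throughput in MSS/RTT: total data sent over the
    first `rtts` round trips, cwnd growing by 1 MSS per RTT."""
    windows = [initial_mss + i for i in range(rtts)]  # 4, 5, ..., 9
    total = sum(windows)  # 39 MSS sent in total
    return total / rtts   # MSS per RTT

print(average_throughput(4, 6))  # 6.5 MSS/RTT
```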
___________THE END____________