In: Computer Science
Why does TCP operate poorly in wireless networks? Give an example to illustrate the problem.
TCP stacks need a way to detect congestion in the network so they can slow down rather than overwhelm an already congested path with packets that will eventually be discarded. Before TCP's congestion-control algorithm was introduced, this kind of congestive collapse was common on busy, bandwidth-constrained networks. Classic TCP stacks treat packet loss as the signal of congestion, mostly because it is easy to detect and measure. Unfortunately, a wireless medium loses packets naturally, a phenomenon TCP researchers call a "leaky pipe". Classic TCP throughput therefore suffers on wireless links: the stack treats these non-congestion losses as congestion and cuts its sending rate under the Additive Increase, Multiplicative Decrease (AIMD) model.
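The AIMD behavior can be sketched with a toy simulation (a simplified model, not a real TCP implementation; the function name and parameters are illustrative). Each "round" stands for one RTT: the window grows by one segment on success and is halved on any loss. Random wireless losses trigger the same halving as congestion losses, which is exactly the problem:

```python
import random

def aimd_avg_window(loss_rate, rounds=10000, seed=42):
    """Toy AIMD sender: +1 segment per round on success, halve on loss.
    Returns the average congestion window, a rough proxy for throughput.
    loss_rate is the per-packet probability of a random (non-congestion)
    loss, e.g. wireless corruption."""
    rng = random.Random(seed)
    cwnd = 1.0
    total = 0.0
    for _ in range(rounds):
        # Probability that at least one of ~cwnd packets this round is
        # lost; AIMD cannot tell wireless loss from congestion loss.
        if rng.random() < loss_rate * cwnd:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
        else:
            cwnd += 1.0                 # additive increase
        total += cwnd
    return total / rounds

if __name__ == "__main__":
    for p in (0.0001, 0.001, 0.01):
        print(f"loss rate {p}: average window {aimd_avg_window(p):.1f}")
```

Running this shows the average window (and hence throughput) shrinking as the random loss rate rises, even though no real congestion exists.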
The packet loss rate has the most significant impact: TCP throughput degrades dramatically as the loss rate increases. In a previous test of mine, TCP throughput dropped from 100 Mbps to about 10 Mbps at just 1% loss. In wireless environments, packet loss is unavoidable, especially when there are obstacles or radio interference.
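This steep drop is consistent with the well-known Mathis model, which bounds steady-state TCP throughput by roughly (MSS/RTT)·√(3/2)/√p for loss probability p. A quick calculation (the MSS and RTT values below are illustrative assumptions, not from the test above) shows throughput falling by 10x when loss rises from 0.01% to 1%:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis model upper bound on steady-state TCP throughput under
    random loss: (MSS/RTT) * sqrt(3/2) / sqrt(p), in bits per second."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_rate)

if __name__ == "__main__":
    # Assumed example link: 1460-byte MSS, 50 ms RTT.
    for p in (0.0001, 0.01):
        mbps = mathis_throughput_bps(1460, 0.05, p) / 1e6
        print(f"loss {p:.2%}: about {mbps:.1f} Mbps")
```

Because throughput scales with 1/√p, a 100x increase in loss rate costs a factor of 10 in throughput, which matches the order of magnitude of the 100 Mbps to 10 Mbps drop described above.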
Thanks.