Flow Control in Wireless Ad-hoc Networks
We are interested in maximizing the Transmission Control Protocol (TCP) throughput
between two nodes in a single-cell wireless ad-hoc network. To this end, we follow a
cross-layer approach by first developing an analytical model that captures the effect
of the wireless channel and the MAC layer on TCP. The analytical model gives the time
evolution of the TCP window size, which is described by a stochastic differential equation
driven by a point process. The point process represents the arrival of acknowledgments
sent by the TCP receiver to the sender as part of the self-regulating mechanism of the flow
control protocol. Through this point process we achieve a cross-layer integration between
the physical layer, the MAC layer and TCP. The intervals between successive points describe
how packet drops at the wireless channel and delays due to retransmissions at
the MAC layer affect the window size at the TCP layer. We fully describe the statistical
behavior of the point process by first computing the p.d.f. of the inter-arrival intervals and
then the compensator and the intensity of the process, parametrized by the quantities that describe the MAC layer and the wireless channel.
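As a rough illustration of the dynamics described above, the sketch below drives a TCP-like additive-increase/multiplicative-decrease window with a point process of ACK arrivals. The i.i.d. exponential inter-ACK intervals and the simple timeout rule are illustrative assumptions only; in the model above, the inter-arrival statistics are shaped by the MAC layer and the channel.

```python
import random

# Toy sketch (not the thesis model): a TCP congestion-avoidance window
# driven by a point process of ACK arrivals.  Inter-ACK gaps are drawn
# i.i.d. exponential purely for illustration; a gap longer than the
# timeout is treated as a packet loss.
def simulate_window(n_acks=1000, timeout=1.0, rate=5.0, seed=0):
    rng = random.Random(seed)
    w, t = 1.0, 0.0          # window size (segments), current time
    trace = []
    for _ in range(n_acks):
        gap = rng.expovariate(rate)   # interval to the next ACK (assumed)
        t += gap
        if gap > timeout:
            w = max(1.0, w / 2.0)     # multiplicative decrease on loss
        else:
            w += 1.0 / w              # additive increase per ACK
        trace.append((t, w))
    return trace

trace = simulate_window()
```

Replacing the exponential gaps with intervals generated by a MAC/channel model is what couples the layers in the cross-layer formulation.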
To achieve analytical tractability, we concentrate on pure (unslotted) Aloha for the
MAC layer and the Gilbert-Elliott model for the channel. Although the Aloha protocol
is simpler than the more popular IEEE 802.11 protocol, it exhibits the same exponential backoff mechanism, which is a key factor in the performance of TCP over a wireless network. Another reason to study the Aloha protocol is that it and its variants
are gaining popularity, as they are used in many of today's wireless networks.
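The Gilbert-Elliott channel is a two-state Markov chain in which each state has its own packet-error probability, and can be sketched in a few lines; the transition and error probabilities below are illustrative placeholders, not values from the text.

```python
import random

# Minimal Gilbert-Elliott channel: states "G" (good) and "B" (bad),
# each with its own packet-error probability.  p_gb / p_bg are the
# per-packet transition probabilities G->B and B->G (all values here
# are hypothetical).
def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.3,
                    e_good=0.01, e_bad=0.5, seed=0):
    rng = random.Random(seed)
    state = "G"
    drops = []
    for _ in range(n_packets):
        err = e_good if state == "G" else e_bad
        drops.append(rng.random() < err)     # True = packet dropped
        # Markov state transition after each packet
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return drops
```

Because the chain lingers in the bad state, losses arrive in bursts rather than independently, which is exactly the correlation structure that complicates the inter-ACK statistics.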
Using the analytical model for the TCP window-size evolution, we seek to increase the TCP throughput between two nodes in a single-cell network. We aim to achieve this by
implicitly informing the TCP sender of the network conditions. We impose this additional
constraint to maintain compatibility between standard TCP and the optimized
version, which allows both protocol stacks to operate in the same network.
We pose the optimization problem as an optimal stopping problem. For each packet
transmitted by the TCP sender to the network, an optimal time instant has to be
computed in the absence of an acknowledgment for that packet. This instant
indicates when a timeout has to be declared for the packet. If the sender waits
too long before declaring a timeout, the network is underutilized; if it declares a
timeout too soon, it unnecessarily reduces its transmission rate. Because of the
analytical intractability of the optimal stopping problem,
we follow a Markov chain approximation method to solve the problem numerically.
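A minimal sketch of such a discretized stopping problem, assuming a hypothetical slotted formulation with a per-slot waiting cost and a fixed timeout penalty (the Markov chain approximation of the continuous-time problem is more involved than this):

```python
# Hypothetical discretization: waiting time is split into slots.  In
# slot k without an ACK, the sender either stops (declares a timeout,
# cost c_stop) or waits one more slot (cost c_wait; the ACK arrives
# with probability p[k], ending the wait at no further cost).
# Backward induction over this finite chain gives the optimal policy.
def optimal_timeout_slot(p, c_wait=1.0, c_stop=10.0):
    n = len(p)
    V = [0.0] * (n + 1)
    V[n] = c_stop                     # horizon: timeout must be declared
    stop_slot = n
    for k in range(n - 1, -1, -1):
        cont = c_wait + (1.0 - p[k]) * V[k + 1]   # expected cost of waiting
        V[k] = min(c_stop, cont)
        if c_stop <= cont:
            stop_slot = k             # stopping is (weakly) optimal here
    return stop_slot, V[0]
```

With an ACK probability of zero the policy stops immediately, and with a high ACK probability it waits out the full horizon; intermediate, state-dependent probabilities produce the timeout thresholds that balance underutilization against premature retransmission.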