TY - GEN
T1 - A state action frequency approach to throughput maximization over uncertain wireless channels
AU - Jagannathan, Krishna
AU - Mannor, Shie
AU - Menache, Ishai
AU - Modiano, Eytan
PY - 2011
Y1 - 2011
N2 - We consider scheduling over a wireless system where the channel state information is not available a priori to the scheduler but can be inferred from the past. Specifically, the wireless system is modeled as a network of parallel queues. We assume that the channel state of each queue evolves stochastically as an ON/OFF Markov chain. The scheduler, which is aware of the queue lengths but oblivious to the channel states, has to choose one queue at a time for transmission. The scheduler has no information regarding the current channel states, but can estimate them by using the acknowledgment history. We first characterize the capacity region of the system using tools from Markov Decision Process (MDP) theory. Specifically, we prove that the capacity region boundary is the uniform limit of a sequence of Linear Programming (LP) solutions. Next, we combine the LP solution with a queue-length-based scheduling mechanism that operates over long frames to obtain a throughput-optimal policy for the system. By incorporating results from MDP theory within the Lyapunov-stability framework, we show that our frame-based policy stabilizes the system for all arrival rates that lie in the interior of the capacity region.
UR - http://www.scopus.com/inward/record.url?scp=79960857090&partnerID=8YFLogxK
U2 - 10.1109/INFCOM.2011.5935211
DO - 10.1109/INFCOM.2011.5935211
M3 - Conference contribution
SN - 9781424499212
T3 - Proceedings - IEEE INFOCOM
SP - 491
EP - 495
BT - 2011 Proceedings IEEE INFOCOM
T2 - IEEE INFOCOM 2011
Y2 - 10 April 2011 through 15 April 2011
ER -