2001, Operations Research
We consider transmission policies for multiple users sharing a single wireless link to a base station. The noise, and hence the probability of correct transmission of a packet, depends on the state of the user receiving the packet. The state for each user is independent of the states of the other users and changes according to a two-state (good/bad) Markov chain. The state of a user is observed only when it transmits. We give conditions under which the optimal policy is the myopic policy, in which a packet is transmitted to the user that is most likely to be in the better of the two states. We do this by showing that the optimal value function is marginally linear in each of the users' probabilities of being in the good state. Our model also may be applied to flexible manufacturing systems with unreliable tools and networked computer systems.
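As a rough illustration of the setting above, the following Python sketch simulates the myopic policy for a few users with two-state (good/bad) Markov channels: the scheduler keeps a belief that each user is in the good state, transmits to the most-likely-good user, and observes that user's state through the transmission. All transition and success probabilities are assumed values chosen for illustration, not parameters from the paper.

```python
import random

# Two-state (Gilbert-Elliott) channel per user: P(good -> good) = p11, P(bad -> good) = p01.
# All numbers below are assumed for illustration.
p11, p01 = 0.9, 0.3
succ_good, succ_bad = 0.95, 0.2   # packet success probability in each state (assumed)

def propagate(belief):
    """One-step update of P(user in good state) when the user is not observed."""
    return belief * p11 + (1.0 - belief) * p01

def myopic_schedule(beliefs):
    """Myopic policy: transmit to the user most likely to be in the good state."""
    return max(range(len(beliefs)), key=lambda i: beliefs[i])

def simulate(n_users=4, horizon=10000, seed=0):
    rng = random.Random(seed)
    states = [rng.random() < 0.5 for _ in range(n_users)]     # True = good state
    beliefs = [0.5] * n_users
    delivered = 0
    for _ in range(horizon):
        i = myopic_schedule(beliefs)
        delivered += rng.random() < (succ_good if states[i] else succ_bad)
        # the scheduled user's state is revealed by the transmission; the others are not observed
        beliefs = [propagate(b) for b in beliefs]
        beliefs[i] = p11 if states[i] else p01
        # all channels evolve to the next slot
        states = [(rng.random() < p11) if s else (rng.random() < p01) for s in states]
    return delivered / horizon

print("throughput under the myopic policy:", simulate())
```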
IEEE Transactions on Automatic Control, 2008
We consider the problem of transmitting packets over a randomly varying point-to-point channel with the objective of minimizing the expected power consumption subject to a constraint on the average packet delay. By casting it as a constrained Markov decision process in discrete time with time-averaged costs, we prove structural results about the dependence of the optimal policy on buffer occupancy, the number of packet arrivals in the previous slot, and the channel fading state, for both i.i.d. and Markov arrivals and channel fading. The techniques we use to establish these results (convexity, stochastic dominance, decreasing differences) are among the standard ones for the purpose. Our main contribution, however, is the passage to the average-cost case, a notoriously difficult problem for which rather limited results are available. The novel proof techniques used here are likely to have utility in other stochastic control problems well beyond the immediate application considered here.
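To make the formulation concrete, here is a minimal Python sketch that solves a discounted Lagrangian relaxation of a power/delay trade-off by value iteration over (buffer occupancy, channel state). It is not the paper's construction, which works directly with time-averaged costs; the buffer size, arrival statistics, per-packet power costs, and Lagrange multiplier are all assumed.

```python
import itertools

# Toy parameters (assumed): buffer capacity, i.i.d. Bernoulli arrivals, two fading states
# with different per-packet power costs, a Lagrange multiplier on the holding (delay) cost.
B = 10
arrivals = {0: 0.5, 1: 0.5}
power_per_packet = {"good": 1.0, "bad": 3.0}
chan_prob = {"good": 0.5, "bad": 0.5}
lam = 0.8        # Lagrange multiplier trading power against delay
gamma = 0.95     # discount factor (the paper handles the average-cost criterion instead)

states = list(itertools.product(range(B + 1), power_per_packet))
V = {s: 0.0 for s in states}

def stage_cost(buf, chan, tx):
    return power_per_packet[chan] * tx + lam * (buf - tx)   # power cost + holding cost

for _ in range(500):                                        # value iteration
    newV = {}
    for buf, chan in states:
        best = float("inf")
        for tx in range(buf + 1):                           # packets transmitted this slot
            exp_next = 0.0
            for a, pa in arrivals.items():
                nb = min(B, buf - tx + a)
                for c, pc in chan_prob.items():
                    exp_next += pa * pc * V[(nb, c)]
            best = min(best, stage_cost(buf, chan, tx) + gamma * exp_next)
        newV[(buf, chan)] = best
    V = newV

print("value at (empty buffer, good channel):", V[(0, "good")])
```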
IEEE INFOCOM 2009 - The 28th Conference on Computer Communications, 2009
We present a novel optimization framework based on stochastic control and Markov theory for wireless networks in which users concurrently access the channel and implement retransmission-based error control. To let users transmit at the same time, we consider an interference-mitigation rather than a collision-avoidance approach. Our focus is on the interaction between the stochastic processes modeling the individual sources of the network. Due to retransmissions, a transmission by one user not only instantaneously interferes with other simultaneous communications but also biases the future evolution of the stochastic processes describing the other users. We thus define a novel interference measure, called process distortion, that takes this effect into account. We investigate the optimization of access and power control for a network with two groups of users, where transmission by the second group is constrained by the process distortion it generates for the first group. We present algorithms to solve the unconstrained and constrained infinite-horizon average-cost-per-stage problems modeling this scenario, and we discuss in detail the application of this framework to cognitive networks.
Int'l J. of Communications, Network and System Sciences, 2012
We analyze a cell with a fixed number of users in a network that operates in discrete time periods. In each time period the base station schedules at most one user for service, based on information about the available data rates and other parameters of all the users in the cell. We consider infinitely backlogged queues, model the system as a Markov Decision Process (MDP), and prove that the optimal policy is monotone with respect to the "starvation age" and the available data rate, under both the discounted and the long-run average criterion. The proofs of the monotonicity properties serve as good illustrations of analyzing MDPs with respect to their optimal solutions.
IEEE Transactions on Communications, 2000
This paper considers an uplink time division multiple access (TDMA) cognitive radio network where multiple cognitive radios (secondary users) attempt to access a spectrum hole. We assume that each secondary user can access the channel according to a decentralized predefined access rule based on the channel quality and the transmission delay of each secondary user. By modeling secondary user block fading channel qualities as a finite state Markov chain, we formulate the transmission rate adaptation problem of each secondary user as a general-sum Markovian dynamic game with a delay constraint. Conditions are given so that the Nash equilibrium transmission policy of each secondary user is a randomized mixture of pure threshold policies. Such threshold policies can be easily implemented. We then present a stochastic approximation algorithm that can adaptively estimate the Nash equilibrium policies and track such policies for non-stationary problems where the statistics of the channel and user parameters evolve with time.
IEEE Transactions on Information Theory, 2004
We consider the problem of scheduling packets over channels with time-varying quality. This problem has received considerable attention recently in the context of devising methods for providing quality of service in wireless communications. Earlier work on this problem considered two cases: either the arrival rate vector lies in the throughput region and policies that stabilize the system are pursued, or all packet queues are saturated and policies that optimize an objective function of the channel throughputs are investigated. In this paper we address the case where no assumption on the arrival rates is made. We obtain a scheduling policy that maximizes the weighted sum of channel throughputs. Under the optimal policy, in the general case, the system may operate in a regime where some queues are stable while the rest become saturated. If stability for the whole system is at all possible, it is always achieved. The optimal policy is a combination of a criterion that gives priorities based on queue lengths and a strict priority rule. The scheduling mechanism switches between the two criteria based on thresholds on the queue lengths and is modulated by the availability of the channels. The analysis of the operation of the system involves the study of a vector process which in steady state has some of its components stable while the others are unstable. We adopt a novel model for time-varying channel availability that dispenses with statistical assumptions and makes a rigorous description of the system dynamics possible. * This paper has been presented in part at the IEEE Infocom '03, April
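A hedged sketch of a scheduler with the switching structure described above: serve by a queue-length-based criterion in normal operation, but fall back to a strict priority order once a queue crosses its threshold, restricted to currently available channels. The weights, thresholds, and priority order below are placeholders, not quantities derived in the paper.

```python
def schedule(queues, available, weights, thresholds, priority_order):
    """Pick the queue to serve this slot, or None if nothing can be served."""
    eligible = [i for i in range(len(queues)) if available[i] and queues[i] > 0]
    if not eligible:
        return None
    over = [i for i in eligible if queues[i] >= thresholds[i]]
    if over:
        # threshold exceeded: strict priority rule (lower value = higher priority)
        return min(over, key=lambda i: priority_order[i])
    # otherwise a queue-length-weighted (max-weight style) criterion
    return max(eligible, key=lambda i: weights[i] * queues[i])

# usage with assumed numbers
print(schedule(queues=[3, 12, 5], available=[True, True, False],
               weights=[1.0, 0.5, 2.0], thresholds=[10, 10, 10],
               priority_order=[1, 0, 2]))
```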
IEEE Journal on Selected Areas in Communications, 2015
Network-coded cooperative communication (NC-CC) has been proposed and evaluated as a powerful technology that can provide a better quality of service in the next-generation wireless systems, e.g., D2D communications. Previous contributions have focused on performance evaluation of NC-CC scenarios rather than searching for optimal policies that can minimize the total cost of reliable packet transmission. We break from this trend by initially analyzing the optimal design of NC-CC for a wireless network with one source, two receivers, and half-duplex erasure channels. The problem is modeled as a special case of Markov decision process (MDP), which is called stochastic shortest path (SSP), and is solved for any field size, arbitrary number of packets, and arbitrary erasure probabilities of the channels. The proposed MDP solution results in an optimal transmission policy per time slot, and we use it to design near-optimal heuristics for packet transmission in a network of one source and N ≥ 2 receivers. We also present numerical results that illustrate the performance of the proposed heuristics under a variety of scenarios. To complete our analysis, our heuristics are implemented in Aalborg University's Raspberry Pi testbed and compared with random linear network coding (RLNC) broadcast in terms of completion time, total number of required transmissions, and percentage of delivered generations. Our measurements show that enabling cooperation only among pairs of devices can decrease the completion time by up to 4.75 times, while delivering 100% of the 10 000 generations transmitted, as compared to RLNC broadcast delivering only 88% of them in our tests.
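The stochastic shortest path (SSP) formulation can be illustrated with a generic value-iteration routine; the toy instance below (delivering two packets over an erasure channel with an assumed success probability of 0.7) is not the paper's NC-CC model, just an example of the solution machinery.

```python
def ssp_value_iteration(states, actions, P, cost, goal, iters=1000):
    """Generic SSP value iteration: goal is absorbing and cost-free."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        newV = {}
        for s in states:
            if s == goal:
                newV[s] = 0.0
                continue
            newV[s] = min(cost[s][a] + sum(p * V[s2] for s2, p in P[s][a].items())
                          for a in actions[s])
        V = newV
    return V

# toy example (assumed): state = packets still missing at the receiver, one action "tx"
states = [2, 1, 0]
actions = {2: ["tx"], 1: ["tx"], 0: []}
P = {2: {"tx": {2: 0.3, 1: 0.7}}, 1: {"tx": {1: 0.3, 0: 0.7}}}
cost = {2: {"tx": 1.0}, 1: {"tx": 1.0}}      # one transmission per slot
print(ssp_value_iteration(states, actions, P, cost, goal=0))
```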
We study the effect of stochastic wireless channel models on the connectivity of ad hoc networks. Unlike the deterministic geometric disk model, where nodes connect if they are within a certain distance of each other, stochastic models attempt to capture small-scale fading effects due to shadowing and multipath received signals. Through analysis of local and global network observables, we present conclusive evidence that network behaviour is highly dependent on whether a stochastic or a deterministic connection model is employed. Specifically, we show that the network mean degree is lower (higher) for stochastic wireless channels than for deterministic ones if the path loss exponent is greater (less) than the spatial dimension. Similarly, the probability of forming isolated pairs of nodes in an otherwise dense random network is much lower for stochastic wireless channels than for deterministic ones. The latter observation explains why the upper bound of k-connectivity is tighter for stochastic wireless channels. We obtain closed-form analytic results and compare them to extensive numerical simulations.
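A small Monte Carlo sketch of the comparison discussed above: mean degree under the deterministic disk model versus a stochastic connection model in which the pairwise connection probability decays as exp(-(r/r0)^eta). The node density, connection range r0, and path-loss exponent eta are assumed values, not those used in the paper.

```python
import math
import random

def mean_degree(model, n_nodes=200, side=10.0, r0=1.0, eta=3.0, trials=10, seed=1):
    """Average node degree over random node placements in a square region."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n_nodes)]
        links = 0
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                r = math.dist(pts[i], pts[j])
                if model == "disk":
                    connected = r <= r0                       # deterministic disk model
                else:
                    connected = rng.random() < math.exp(-((r / r0) ** eta))  # stochastic link
                links += connected
        total += 2 * links / n_nodes
    return total / trials

print("disk model mean degree:      ", mean_degree("disk"))
print("stochastic model mean degree:", mean_degree("stochastic"))
```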
Proceeding from the 2006 workshop on Game theory for communications and networks - GameNets '06, 2006
Wireless Networks and Mobile Communications, 2011
IEEE Transactions on Mobile Computing, 2000
In this paper, the problem of throughput optimization in decentralized wireless networks with spatial randomness under queue stability and packet loss constraints is investigated. Two key performance measures are analyzed, namely the effective link throughput and the network spatial throughput. Specifically, the combination of medium access probability, coding rate, and maximum number of retransmissions that maximize each throughput metric is analytically derived for a class of Poisson networks, in which packets arrive at the transmitters following a geometrical distribution. Necessary conditions so that the effective link throughput and the network spatial throughput are stably achievable under bounded packet loss are determined, as well as upper bounds for both cases by solving the unconstrained optimization problem. Our results show in which system configuration stable achievable throughputs can be obtained as a function of the network density and the arrival rate. They also evince conditions for which the per-link throughput-maximizing operating points coincide or not with the aggregate network throughput-maximizing operating regime.
ACM Transactions on Modeling and Computer Simulation, 2010
High-Speed Downlink Packet Access (HSDPA) provides a high cell peak data rate (up to 10.8 Mbps) on the downlink by incorporating Adaptive Modulation and Coding (AMC) and fast scheduling. Scheduling is one of the most important QoS control approaches in this system and can significantly affect overall system performance.
Wireless Networks, 2006
We develop scheduling strategies for carrying multimedia traffic over a polled multiple access wireless network with fading. We consider a slotted system with three classes of traffic (voice, streaming media and file transfers). A Markov model is used for the fading and also for modeling voice packet arrivals and streaming arrivals. The performance objectives are a loss probability for voice, mean network delay for streaming media, and time-average throughput for file transfers. A central scheduler (e.g., the access point in a single cell IEEE 802.11 wireless local area network (WLAN)) is assumed to be able to keep track of all the available state information and make the scheduling decision in each slot (e.g., as would be the case for PCF mode operation of the IEEE 802.11 WLAN). The problem is modeled as a constrained Markov decision problem. By using constraint relaxations (a linear relaxation and Whittle-type relaxations), an index-based policy is obtained. For the file transfers the decision problem turns out to be one with partial state information. Numerical comparisons are provided with the performance obtained from some simple policies. Keywords: scheduling over fading wireless channels; indexability and index policies; QoS in 802.11 wireless LANs.
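A schematic of an index-based slot scheduler in the spirit of the relaxation described above: each traffic class maps its local state to an index and the slot goes to the highest index among classes whose channel is currently usable. The index functions below are illustrative placeholders, not the Whittle-type indices derived in the paper.

```python
def voice_index(queued, deadline_slots):
    return queued / max(deadline_slots, 1)      # pressure from looming playout deadlines

def streaming_index(delay):
    return 0.3 * delay                          # grows with head-of-line delay

def file_index(slots_since_served):
    return 0.05 * slots_since_served            # protects long-term throughput share

def schedule(state, channel_ok):
    """Serve the class with the highest index among those with a usable channel."""
    indices = {
        "voice":     voice_index(state["voice_q"], state["voice_deadline"]),
        "streaming": streaming_index(state["stream_delay"]),
        "file":      file_index(state["file_starve"]),
    }
    eligible = {k: v for k, v in indices.items() if channel_ok[k]}
    return max(eligible, key=eligible.get) if eligible else None

# usage with assumed per-slot state
print(schedule({"voice_q": 3, "voice_deadline": 2, "stream_delay": 8, "file_starve": 40},
               channel_ok={"voice": True, "streaming": True, "file": False}))
```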
Operations Research and Management Science, 2017
Wireless devices are often able to communicate on several alternative channels; for example, cellular phones may use several frequency bands and are equipped with base-station communication capability together with WiFi and Bluetooth communication. Automatic decision support systems in such devices need to decide which channels to use at any given time so as to maximize the long-run average throughput. A good decision policy needs to take into account that, due to cost, energy, technical, or performance constraints, the state of a channel is only sensed when it is selected for transmission. Therefore, the greedy strategy of always exploiting those channels assumed to yield the currently highest transmission rate is not necessarily optimal with respect to long-run average throughput. Rather, it may be favourable to give some priority to the exploration of channels of uncertain quality.
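The exploration/exploitation tension can be sketched as follows: channels evolve as two-state Markov chains, only the selected channel's state is revealed, and the controller adds a simple staleness bonus to each belief rather than acting purely greedily. The bonus form and all parameters are assumptions for illustration, not an optimal policy.

```python
import random

p11, p01 = 0.85, 0.2              # assumed good->good and bad->good transition probabilities
rate_good, rate_bad = 10.0, 1.0   # assumed throughput in each channel state

def propagate(b):
    return b * p11 + (1 - b) * p01

def choose(beliefs, last_seen, t, bonus=2.0):
    # greedy belief plus a bonus growing with the time since the channel was last observed
    def score(i):
        return beliefs[i] + bonus * (t - last_seen[i]) / (t + 1)
    return max(range(len(beliefs)), key=score)

def simulate(n_channels=3, horizon=5000, seed=0):
    rng = random.Random(seed)
    state = [rng.random() < 0.5 for _ in range(n_channels)]   # True = good
    beliefs, last_seen = [0.5] * n_channels, [0] * n_channels
    total = 0.0
    for t in range(horizon):
        i = choose(beliefs, last_seen, t)
        total += rate_good if state[i] else rate_bad
        beliefs = [propagate(b) for b in beliefs]
        beliefs[i], last_seen[i] = (p11 if state[i] else p01), t   # sensed channel is revealed
        state = [(rng.random() < p11) if s else (rng.random() < p01) for s in state]
    return total / horizon

print("long-run average throughput:", simulate())
```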
Methodology and Computing in Applied Probability, 2019
We consider a simple discrete-time controlled queueing system, where the controller has a choice of which server to use at each time slot and server performance varies according to a Markov-modulated random environment. We explore the role of information in the system.
IEEE Transactions on Wireless Communications, 2000
Wireless networks, as an indispensable part of the Internet, are expected to support diverse quality-of-service requirements and traffic characteristics. This paper presents a stochastic performance analysis of a wireless network with a finite-state Markov channel (FSMC), based on stochastic network calculus. Specifically, the analytical principle behind stochastic network calculus is first presented. Then, based on this principle, delay and backlog upper bounds are derived, for both the single-user and the multi-user case. For the multi-user case, two methods of sharing the channel among eligible users are studied: the even sharing method, in which the channel service rate is evenly divided among the eligible users, and the exclusive use method, in which the channel is used exclusively by a user randomly selected from the eligible users. For the even sharing method, the problem that the state space grows exponentially with the number of users is addressed through a novel approach: a new Markov modulation process is constructed from the channel state process such that the multi-user effects are reflected equivalently by its transition and steady-state probabilities while the size of the state space remains unchanged as the number of users increases, which significantly reduces the complexity of calculating the obtained backlog and delay bounds. Finally, the proposed analytical principle and methods are validated through comparison between analytical and simulation results.
This paper investigates the benefits of cooperation in large wireless networks with multiple sources and relays, where the nodes form a homogeneous Poisson point process. Each source node may use its nearest neighbor from the set of inactive nodes as its relay. Although cooperation can potentially lead to significant improvements in the asymptotic error probability of a communication pair, relaying causes additional interference in the network, increasing the average noise. We address the basic question of how source nodes should optimally balance cooperation against interference to guarantee reliability in all communication pairs. Based on the decode-and-forward (DF) scheme at the relays (which is near optimal when the relays are close to their corresponding sources), we derive closed-form approximations to the upper bounds on the error probability, averaging over all node positions. Surprisingly, in the small node-density regime, there is an almost binary behavior that dictates, depending on the network parameters, the activation or not of all relay nodes.
Theory of Probability & Its Applications, 2006
The paper concerns both controlled diffusion processes and processes in discrete time. We establish conditions under which the strategy minimizing the expected value of a cost functional has a much stronger property: namely, it minimizes the random cost functional itself for all realizations of the controlled process from a set whose probability is close to one for large time horizons. The main difference between the conditions mentioned and those obtained earlier is that the former do not deal with strategies that are themselves optimal in the mean, but concern the possibility of the controlled process passing from one state to another in a time with finite expectation. This makes the verification of these conditions much easier in a number of situations.
2004
In this paper we consider the problem of joint optimal power and admission control for a single-user queue. The user may be in one of a finite set of channel fading states, each of which corresponds to a certain power-rate function. The user may choose a particular transmission power level and incur a cost that increases with the power. Packets arrive at the queue as a Poisson process with a constant rate. The user may choose a dropping probability for the incoming packet, which incurs a cost that increases with the dropping probability. Packets remaining in the queue also incur a holding cost. The goal is for the user to optimally choose its transmission power/rate and its admission rate so as to minimize the sum of the above costs, which model the tradeoff between increasing transmission power, increasing packet delay, and dropping a packet. In this paper we investigate a number of monotonicity properties of the optimal solution to this problem. Specifically, we prove the following results under an optimal strategy: (1) the output or transmission rate of the queue does not decrease and the acceptance rate does not increase as the queue size increases; (2) the output rate does not decrease and the acceptance rate does not increase as the time horizon (or number of steps to go) increases; (3) for a fixed transmission rate, the acceptance rate does not increase as the system enters a worse fading state; (4) under certain conditions on the arrival and maximum transmission rates, the output rate does not increase as the system enters a worse fading state. These results provide good insight into the structure of the problem.
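A small finite-horizon dynamic-programming sketch of a joint transmission/admission problem of this flavour, with made-up costs and without the fading state, to make the trade-off concrete; the printed table of optimal actions can be inspected for monotone behaviour of the transmission action in the queue length, the kind of structure the paper establishes for its richer model.

```python
# Assumed toy parameters: small buffer, Bernoulli arrivals, power cost convex in the number
# of transmitted packets, linear holding cost, and a fixed cost for dropping an arrival.
B, T = 8, 40
power_cost = {0: 0.0, 1: 1.0, 2: 2.5}
hold, drop_cost, p_arrival = 0.5, 3.0, 0.6

V = [0.0] * (B + 1)
for _ in range(T):                                   # finite-horizon dynamic programming
    newV, policy = [0.0] * (B + 1), {}
    for q in range(B + 1):
        best, best_a = float("inf"), None
        for tx in range(min(q, 2) + 1):              # packets transmitted this step
            for admit in (0, 1):                     # admit or drop an arriving packet
                cost = power_cost[tx] + hold * (q - tx) + drop_cost * (1 - admit) * p_arrival
                nq_arr = min(B, q - tx + 1)          # arrivals beyond the buffer are lost here
                exp_next = p_arrival * (admit * V[nq_arr] + (1 - admit) * V[q - tx]) \
                           + (1 - p_arrival) * V[q - tx]
                if cost + exp_next < best:
                    best, best_a = cost + exp_next, (tx, admit)
        newV[q], policy[q] = best, best_a
    V = newV

for q in range(B + 1):
    print("queue length", q, "-> (transmit, admit) =", policy[q])
```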
2009
We consider a finite number of users, with infinite buffer storage, sharing a single channel using the ALOHA medium access protocol. This is an interesting example of a non-saturated collision channel. We investigate the uplink of a cellular system in which each user selects a desired throughput. The users then participate in a non-cooperative game wherein they adjust their transmit rates to attain their desired throughputs. We show that this game, in contrast to the saturated case, either has no Nash equilibrium or has infinitely many Nash equilibria. Further, we show that the region of Nash equilibria coincides with an appropriate 'stability region'. We also discuss the efficiency of the equilibria in terms of energy consumption and congestion rate. Next, we propose two learning algorithms based on a stochastic iterative procedure that converges to the best Nash equilibrium: the first needs partial information (the transmit rates of other users during the last slot), whereas the second is an information-less and fully distributed scheme. We approximate the control iterations by an equivalent ordinary differential equation in order to prove that the proposed stochastic learning algorithm converges to a Nash equilibrium even in the absence of any coordination or extra information. Extensive numerical examples and simulations are provided to validate our results.
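A hedged sketch of a fully distributed stochastic-approximation update of the second kind mentioned above, on a simplified saturated, slotted collision-channel abstraction: each user nudges its transmit probability toward its target throughput using only its own success feedback. The step size, targets, and update form are illustrative assumptions, not the paper's algorithm.

```python
import random

def simulate(targets, horizon=100000, step=0.001, seed=0):
    rng = random.Random(seed)
    n = len(targets)
    p = [0.3] * n                                    # initial transmit probabilities
    successes = [0] * n
    for _ in range(horizon):
        tx = [rng.random() < p[i] for i in range(n)]
        for i in range(n):
            ok = tx[i] and sum(tx) == 1              # collision channel: success iff transmitting alone
            successes[i] += ok
            # stochastic approximation: raise p when below the target throughput, lower it otherwise
            p[i] = min(0.99, max(0.01, p[i] + step * (targets[i] - ok)))
    return p, [s / horizon for s in successes]

p, thr = simulate(targets=[0.1, 0.1, 0.1])
print("transmit probabilities:", [round(x, 3) for x in p])
print("achieved throughputs:  ", [round(x, 3) for x in thr])
```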
2015 Twenty First National Conference on Communications (NCC), 2015
We consider a wireless channel shared by multiple transmitter-receiver pairs. Their transmissions interfere with each other. Each transmitter-receiver pair aims to maximize its long-term average transmission rate subject to an average power constraint. This scenario is modeled as a stochastic game. We provide sufficient conditions for existence and uniqueness of a Nash equilibrium (NE). We then formulate the problem of finding NE as a variational inequality (VI) problem and present an algorithm to solve the VI using regularization. We also provide distributed algorithms to compute Pareto optimal solutions for the proposed game.
2017
This paper is dedicated to exploring and exploiting a hidden resource in the wireless channel. We discover that stochastic dependence in the wireless channel capacity is such a hidden resource: specifically, if the wireless channel capacity exhibits negative dependence, the wireless channel attains better performance with a smaller capacity. We find that the dependence in the wireless channel is determined by both uncontrollable and controllable parameters of the wireless system; by inducing negative dependence through the controllable parameters, we achieve dependence control. We model the wireless channel capacity as a Markov additive process, i.e., an additive process defined on a Markov process, and we use copulas to represent the dependence structure of the underlying Markov process. Based on a priori information about the temporal dependence of the uncontrollable parameters and the spatial dependence between the uncontrollable and controllable parameters, we construct a sequence of temporal copu...
This project aims to deal with the low reliability of nodes in a wireless network and to study the behaviour of each node dynamically. Data is split into a predefined number of packets, and information is attached to each packet so that the link quality can be studied once it reaches the destination, together with how the data is broadcast through the network; packet information is kept in storage. When a node failure occurs, the cause is analysed: if it is due to a technical failure, the server's inability to process the request, or the client's inability to receive the data, the data is resent; if it is a deliberate rejection by the client, a middle man acquiring the packets, or a mismatch between the destination IP and the receiver's IP, the node is dropped. When a node is dropped, optimal grouping is used to decide the next best link, achieving better throughput than a random selection of links, which would be very inefficient even for a small round trip.
2005
We derive outage expressions and throughput bounds for wireless networks subject to different sources of nondeterminism. The degree of uncertainty is characterized by the location of the network in the uncertainty cube, whose three axes represent the three main sources of uncertainty in interference-limited networks: the node distribution, the channel gains, and the channel access. The range of each coordinate is [0,1], where 0 indicates complete determinism and 1 a maximum degree of randomness (nodes distributed as a Poisson point process, fading with fading figure 1, and ALOHA channel access, respectively).
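For the fully random corner of the uncertainty cube (Poisson node locations, Rayleigh fading, ALOHA access), an outage probability can be estimated by Monte Carlo as sketched below; the density, path-loss exponent, SIR threshold, and access probability are assumed values for illustration only.

```python
import math
import random

def sample_poisson(rng, mean):
    # simple inverse-transform Poisson sampler (adequate for moderate means)
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def outage_probability(density=0.2, alpha=4.0, theta=1.0, p_access=0.3,
                       link_dist=1.0, region=20.0, trials=2000, seed=2):
    """Receiver at the origin; intended transmitter at distance link_dist; interferers form a PPP."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        n = sample_poisson(rng, density * region * region)
        interference = 0.0
        for _ in range(n):                     # uniform points = PPP conditioned on n
            if rng.random() >= p_access:       # ALOHA: only a fraction of nodes transmit
                continue
            x = rng.uniform(-region / 2, region / 2)
            y = rng.uniform(-region / 2, region / 2)
            r = math.hypot(x, y)
            if r < 1e-3:
                continue
            interference += rng.expovariate(1.0) * r ** (-alpha)   # Rayleigh fading power
        signal = rng.expovariate(1.0) * link_dist ** (-alpha)
        outages += signal < theta * interference                   # interference-limited SIR test
    return outages / trials

print("estimated outage probability:", outage_probability())
```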
IEEE GLOBECOM 2007-2007 IEEE Global Telecommunications Conference, 2007
Constrained Stochastic Games in Wireless Networks. Eitan Altman, Konstantin Avratchenkov, Nicolas Bonneau, Mérouane Debbah, Rachid El-Azouzi, Daniel Sadoc Menasché. INRIA, Centre Sophia ...