ADVENTURES ON NETWORKS: DEGREES AND GAMES
A network consists of a set of nodes and a set of edges, with the edges representing pairwise connections between nodes. Examples of real-world networks include the Internet, the World Wide Web, social networks, and transportation networks, which are often modeled as random graphs. In the first half of this thesis, we explore the degree distributions of such random graphs. In homogeneous networks or graphs, the behavior of the (generic) degree of a single node is often thought to reflect the degree distribution of the graph, defined as the usual fraction of nodes with a given degree. To examine this preconceived notion, we introduce a general framework for discussing the conditions under which these two degree distributions coincide asymptotically in large random networks. Although Erdős–Rényi graphs, along with other well-known random graph models, satisfy these conditions, we show that there exist homogeneous random graphs for which this conclusion fails to hold. A counterexample is found in the class of random threshold graphs. An implication of this finding is that random threshold graphs cannot be used as a substitute for the Barabási–Albert model in scale-free network modeling, as has been proposed in some works.

Since the Barabási–Albert model was proposed, other network growth models have been introduced and shown to generate scale-free networks. We study one such basic network growth model, called the fitness model, which captures the inherent attributes of individual nodes through fitness values (drawn from a fitness distribution) that influence network growth. We characterize the tail of the network-wide degree distribution in terms of the fitness distribution and demonstrate that the fitness model is indeed richer than the Barabási–Albert model, in that it can produce power-law degree distributions with varying parameters as well as other non-Poisson degree distributions.
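To make the fitness-driven growth mechanism concrete, the following is a minimal illustrative simulation of one common fitness-model variant (in the style of Bianconi–Barabási), in which each arriving node attaches to existing nodes with probability proportional to fitness times degree. The uniform fitness distribution, the parameters, and the function name are assumptions for illustration only; the thesis's exact attachment rule may differ.

```python
import random

def grow_fitness_network(n, m=1, seed=0):
    """Grow a network of n nodes where each new node attaches to m
    existing nodes chosen with probability proportional to
    fitness * degree (an illustrative fitness-model sketch).
    Fitnesses are drawn i.i.d. uniform on (0, 1) here purely for
    illustration. Returns the final degree sequence."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]  # assumed U(0,1) fitnesses
    degree = [0] * n
    # seed the growth with a small complete graph on m + 1 nodes
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    # each new node attaches to m distinct existing nodes,
    # sampled with weight fitness[i] * degree[i]
    for new in range(m + 1, n):
        weights = [fitness[i] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            degree[t] += 1
            degree[new] += 1
    return degree
```

Tallying the resulting degree sequence for large n (and varying the fitness distribution) is one way to observe empirically the non-Poisson, heavy-tailed behavior discussed above.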
In the second half of the thesis, we look at interactions between nodes in a game-theoretic setting. Here, the nodes may represent interacting agents making decisions over time, while the edges represent the dependence of each agent's payoff on the decisions taken by other nodes. We study learning rules that the agents can adopt so that the entire system of agents reaches a desired operating point in various scenarios motivated by practical concerns facing engineering systems. For our analysis, we abstract out the network and represent the problem as a strategic-form repeated game. We consider two classes of learning rules: a class of better-reply rules and a new class of rules, which we call monitoring rules. Motivated by practical concerns, we first consider a scenario in which agents revise their actions asynchronously based on delayed payoff information. We prove that, under the better-reply rules (when certain mild assumptions hold), the action profiles played by the agents converge almost surely to a pure-strategy Nash equilibrium (PSNE), with finite expected convergence time, in a large class of games called generalized weakly acyclic games (GWAGs). A similar result is shown to hold for the monitoring rules in GWAGs, and also in games satisfying a payoff interdependency structure. Second, we investigate a scenario in which the payoff information is unreliable, causing agents to make erroneous decisions occasionally. When the agents follow the better-reply rules and the payoff information becomes more accurate over time, we demonstrate that the agents play a PSNE with probability tending to one in GWAGs. Under a similar setting, when the agents follow the monitoring rules, we show that the action profile converges weakly to certain characterizable PSNE(s). Finally, we study a scenario in which an agent may, from time to time, erroneously execute an action other than the one intended. Under such a setting, we show that the monitoring rules ensure that the system reaches PSNE(s) that are resilient to deviations by potentially multiple agents.
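As a toy illustration of the asynchronous revision process described above, the sketch below runs better-reply dynamics for a two-player game until a PSNE is reached. The specific rule shown (one uniformly chosen player revises per step, picking uniformly among strict better replies, with perfect and immediate payoff information) is an assumed simplification for illustration; the thesis's rules operate under delays, noise, and more general game classes.

```python
import random

def better_reply_dynamics(payoff, n_actions, max_steps=10000, seed=0):
    """Toy asynchronous better-reply dynamics for a two-player game.
    payoff[i] maps a joint-action list to player i's payoff.
    Runs until a pure-strategy Nash equilibrium (PSNE) is detected
    or max_steps elapse; returns the final joint action."""
    rng = random.Random(seed)

    def better_replies(i, a):
        # actions strictly improving player i's payoff, others fixed
        cur = payoff[i](a)
        return [x for x in range(n_actions)
                if payoff[i](a[:i] + [x] + a[i + 1:]) > cur]

    a = [rng.randrange(n_actions) for _ in range(2)]
    for _ in range(max_steps):
        # stop once no player has a profitable unilateral deviation
        if not better_replies(0, a) and not better_replies(1, a):
            return tuple(a)
        i = rng.randrange(2)  # asynchronous revision: one player at a time
        choices = better_replies(i, a)
        if choices:
            a[i] = rng.choice(choices)
    return tuple(a)
```

For instance, in a pure coordination game (payoff 1 when actions match, 0 otherwise), which is weakly acyclic, these dynamics settle on a matching joint action, mirroring the almost-sure convergence to a PSNE established in the thesis under far weaker informational assumptions.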