Electrical & Computer Engineering

Permanent URI for this community: http://hdl.handle.net/1903/2234

Search Results

Now showing 1 - 10 of 10
  • Item
    Representation Learning for Reinforcement Learning: Modeling Non-Gaussian Transition Probabilities with a Wasserstein Critic
    (2024) Tse, Ryan; Zhang, Kaiqing; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Reinforcement learning algorithms depend on effective state representations when solving complex, high-dimensional environments. Recent methods learn state representations using auxiliary objectives that aim to capture relationships between states that are behaviorally similar, meaning states that lead to similar future outcomes under optimal policies. These methods learn explicit probabilistic state transition models and compute distributional distances between state transition probabilities as part of their measure of behavioral similarity. This thesis presents a novel extension to several of these methods that directly learns the 1-Wasserstein distance between state transition distributions by exploiting the Kantorovich-Rubinstein duality. This method eliminates parametric assumptions about the state transition probabilities while providing a smoother estimator of distributional distances. Empirical evaluation demonstrates improved sample efficiency over some of the original methods, at a modest increase in computational cost per sample. The results establish that relaxing theoretical assumptions about state transition modeling leads to more flexible and robust representation learning while maintaining strong performance characteristics.
  • Item
    Learning in Large Multi-Agent Systems
    (2024) Kara, Semih; Martins, Nuno C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this dissertation, we study a framework of large-scale multi-agent strategic interactions. The agents are nondescript and use a learning rule to repeatedly revise their strategies based on their payoffs. Within this setting, our results are structured around three main themes: (i) guaranteed learning of Nash equilibria, (ii) the inverse problem, i.e., estimating the payoff mechanism from the agents' strategy choices, and (iii) applications to the placement of electric vehicle charging stations. In the traditional setup, the agents' inter-revision times follow independent and identical exponential distributions. We expand on this by allowing these intervals to depend on the agents' strategies or to have Erlang distributions. These extensions enhance the framework's modeling capabilities, enabling it to address problems such as task allocation with varying service times or multiple stages. We also explore a third generalization, concerning the accessibility among strategies. The majority of the existing literature assumes that the agents can transition between any two strategies, whereas we allow only certain alternatives to be accessible from certain others. This adjustment further improves the framework's modeling capabilities, for instance by incorporating constraints on strategy switching related to spatial and informational factors. For all of these extensions, we use Lyapunov's method and passivity-based techniques to find conditions on the revision rates, learning rule, and payoff mechanism that ensure the agents learn to play a Nash equilibrium of the payoff mechanism. For our second class of problems, we adopt a multi-agent inverse reinforcement learning perspective. Here, we assume that the learning rule is known but, unlike in existing work, the payoff mechanism is unknown. We propose a method to estimate the unknown payoff mechanism from sample path observations of the populations' strategy profile.
Our approach is twofold: We estimate the agents' strategy transition probabilities, which we then use, along with the known learning rule, to obtain a payoff mechanism estimate. Our findings regarding the estimation of transition probabilities are general, while for the second step, we focus on linear payoff mechanisms and three well-known learning rules (Smith, replicator, and Brown-von Neumann-Nash). Additionally, under certain assumptions, we show that we can use the payoff mechanism estimate to predict the Nash equilibria of the unknown mechanism and to forecast the strategy profile induced by other rules. Lastly, we contribute to a traffic simulation tool by integrating electric vehicles, their charging behaviors, and charging stations. This simulation tool is based on spatial-queueing principles and, although less detailed than some microscopic simulators, it runs much faster and accurately represents traffic rules. Using this tool, we identify optimal charging station locations (on real roadway networks) that minimize the overall traffic.
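For the replicator rule, the two-step inverse idea above can be sketched concretely: the replicator flow is linear in the entries of the payoff matrix, so observing the strategy profile's path lets one recover the payoffs by least squares. A toy sketch under that assumption (rock-paper-scissors payoffs and Euler-discretized dynamics are illustrative, not the dissertation's setting):

```python
import numpy as np

# Simulate replicator dynamics x'_i = x_i((Ax)_i - x^T A x) for a known A,
# record the strategy profile, then recover A from the observed flows by
# least squares. Illustrative of the two-step estimation idea only.

def replicator_step(x, A, dt):
    f = A @ x
    return x + dt * x * (f - x @ f)

A_true = np.array([[0.0, 1.0, -1.0],
                   [-1.0, 0.0, 1.0],
                   [1.0, -1.0, 0.0]])      # rock-paper-scissors payoffs
dt = 0.01
xs = [np.array([0.5, 0.3, 0.2])]
for _ in range(400):
    xs.append(replicator_step(xs[-1], A_true, dt))
xs = np.array(xs)
dx = np.diff(xs, axis=0) / dt              # observed strategy "flows"

# dx_i is linear in the entries A_jk: build one row per (time, strategy).
rows, rhs = [], []
for t in range(len(dx)):
    x = xs[t]
    for i in range(3):
        M = np.zeros((3, 3))
        M[i, :] += x[i] * x                # x_i (Ax)_i term
        M -= x[i] * np.outer(x, x)         # -x_i x^T A x term
        rows.append(M.ravel())
        rhs.append(dx[t, i])
A_est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0].reshape(3, 3)
```

Note that the payoff matrix is only identifiable up to shifts that leave the replicator flow unchanged, so the estimate is validated by the flow it induces rather than entrywise against `A_true`.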
  • Item
    Mean-field Approaches in Multi-agent Systems: Learning and Control
    (2023) Tirumalai, Amoolya; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In many settings in physics, chemistry, biology, and sociology, when individuals (particles) interact in large collectives, they begin to behave in emergent ways. This is to say that their collective behavior is altogether different from their individual behavior. In physics and chemistry, particles interact through various forces, and this results in the rich behavior of the phases of matter. A particularly interesting case arises in the dynamics of gaseous star formation. In models of star formation, the gases are subject to the attractive gravitational force, and perhaps viscosity, electromagnetism, or thermal fluctuations. Depending on initial conditions, and inclusion of additional forces in the models, a variety of interesting configurations can arise, from dense nodules of gas to swirling vortices. In biology and sociology, these interactions (forces) can be explicitly tied to chemical or physical phenomena, as in the case of microbial chemotaxis, or they can be more abstract or virtual, as in the case of bird flocking or human pedestrian traffic. We focus on the latter cases in this work. In collective animal or human traffic, we do not say that animals or humans are explicitly subject to physical forces that cause them to move in alignment with one another. Rather, they behave as if there were such forces. In short, we use the language and notation of physics and forces as a convenient tool to build our understanding. We do so since natural phenomena are rich with sophisticated and adaptive behavior. Bird flocks rapidly adapt to avoid collisions, to fly around obstacles, and to confuse predators. Engineers today can only dream of building drone swarms with such plasticity. An important question to answer is how one takes a model of interacting individuals and builds a model of a collective.
Once one answers this question, another immediately follows: how do we take these models of collectives and use them to discover representations of natural phenomena? Then, can we use these models to build methods to control such phenomena, assuming suitable actuation? Once these questions are answered, our understanding of collective dynamics will improve, broadening the applications we can tackle. In this thesis, we study collective dynamics via mean-field theory. In mean-field theory, an individual is totally anonymous, and so can be removed or permuted from a large collective without changing the collective dynamics significantly. More specifically, when any individual is excluded from the definition of the empirical measure of all the individuals, those empirical measures converge to the same measure, termed the mean-field measure. The mean-field measure is governed by the forward Kolmogorov equation. In certain scenarios where an analogy can be drawn to particle dynamics, these forward Kolmogorov equations can be converted to compressible Euler equations. When optimal control problems are posed on the particle dynamics, in the mean-field limit we obtain a forward Kolmogorov equation coupled to a backward Hamilton-Jacobi-Bellman (-Isaacs) equation (or a stationary analogue of these). This system of equations describes the solution to the mean-field game. The first two problems we explore in this thesis are focused on the system identification (inverse) problem: discover a model of collective dynamics from data. In these problems, we study a generalized hydrodynamic Cucker-Smale-type model of flocking in a bounded region of 3D space. We first prove the existence of weak bounded-energy solutions and a weak-strong uniqueness principle for our model. Then, we use the model to learn a representation of the dynamics of data associated with a synthetic bird flock.
The next two problems we study focus on the control (forward) problem: learn an approximately optimal control for collective dynamics online. We study this first in a relatively simple state- and control-constrained mean-field game on traffic. In this case, the mean-field term is contained only in the mean-field game's cost. We first numerically study a finite-horizon version of this problem; the approach for this first problem is not online. Then, we take an infinite-horizon version, and we form a system of approximate dynamic programming ODE-PDEs from the exact dynamic programming PDEs. This approach results in online learning and adaptation of the control to the dynamics. We prove that this ODE-PDE system has a unique weak solution via semigroup and successive-approximation methods. We present a numerical example and discuss the tradeoffs of this approach. We conclude the thesis by summarizing our results and discussing future directions and applications in theoretical and practical settings.
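The forward-backward coupling described above can be written in a standard second-order form (a generic sketch with illustrative notation, not the thesis's specific traffic or flocking model):

```latex
% Generic mean-field game system: a backward HJB equation for the value
% function V coupled to a forward Kolmogorov (Fokker--Planck) equation for
% the mean-field measure m, with running cost L, drift f, noise level \nu,
% and u^* the minimizing control.
\begin{align}
  -\partial_t V &= \min_{u} \Big\{ L(x, u, m) + \nabla V \cdot f(x, u) \Big\}
                   + \nu \, \Delta V, \\
  \partial_t m  &= -\nabla \cdot \big( m \, f(x, u^*(x, t)) \big)
                   + \nu \, \Delta m ,
\end{align}
```

with a terminal condition on V and an initial condition on m, which is why the system is solved backward and forward in time simultaneously.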
  • Item
    Control Theory-Inspired Acceleration of the Gradient-Descent Method: Centralized and Distributed
    (2022) Chakrabarti, Kushal; Chopra, Nikhil; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Mathematical optimization problems are prevalent across various disciplines in science and engineering. Particularly in electrical engineering, convex and non-convex optimization problems are well-known in signal processing, estimation, control, and machine learning research. In many of these contemporary applications, the data points are dispersed over several sources. Restrictions such as industrial competition, administrative regulations, and user privacy have motivated significant research on distributed optimization algorithms for solving such data-driven modeling problems. The traditional gradient-descent method can solve optimization problems with differentiable cost functions. However, the speed of convergence of the gradient-descent method and its accelerated variants is highly influenced by the conditioning of the optimization problem being solved. Specifically, when the cost is ill-conditioned, these methods (i) require many iterations to converge and (ii) are highly unstable against process noise. In this dissertation, we propose novel optimization algorithms, inspired by control-theoretic tools, that can significantly attenuate the influence of the problem's conditioning. First, we consider solving the linear regression problem in a distributed server-agent network. We propose the Iteratively Pre-conditioned Gradient-Descent (IPG) algorithm to mitigate the deleterious impact of the data points' conditioning on the convergence rate. We show that the IPG algorithm has an improved rate of convergence in comparison to both the classical and the accelerated gradient-descent methods. We further study the robustness of IPG against system noise and extend the idea of iterative pre-conditioning to stochastic settings, where the server updates the estimate based on a randomly selected data point at every iteration. In the same distributed environment, we present theoretical results on the local convergence of IPG for solving convex optimization problems. 
Next, we consider solving a system of linear equations in peer-to-peer multi-agent networks and propose a decentralized pre-conditioning technique. The proposed algorithm converges linearly, with an improved convergence rate compared to decentralized gradient-descent. Considering the practical scenario where the computations performed by the agents are corrupted, or a communication delay exists between them, we study the robustness guarantees of the proposed algorithm and a variant of it. We apply the proposed algorithm to solving decentralized state estimation problems. Further, we develop a generic framework for adaptive gradient methods that solve non-convex optimization problems. Here, we model the adaptive gradient methods in a state-space framework, which allows us to exploit control-theoretic methodology in analyzing Adam and its prominent variants. We then utilize the classical transfer function paradigm to propose new variants of a few existing adaptive gradient methods. Applications on benchmark machine learning tasks demonstrate our proposed algorithms' efficiency. Our findings suggest further exploration of existing tools from control theory in complex machine learning problems. The dissertation concludes by showing that the potential of the aforementioned IPG idea goes beyond solving generic optimization problems, through the development of a novel distributed beamforming algorithm and a novel observer for nonlinear dynamical systems, in which IPG's robustness serves as a foundation of our designs. The proposed IPG for distributed beamforming (IPG-DB) facilitates rapid establishment of communication links with far-field targets while jamming potential adversaries, without assuming any feedback from the receivers, subject to unknown multipath fading in realistic environments.
The proposed IPG observer utilizes a non-symmetric pre-conditioner, like IPG, as an approximation of the inverse Jacobian of the observability mapping, such that it asymptotically replicates the Newton observer with the additional advantage of enhanced robustness against measurement noise. Empirical results are presented, demonstrating the efficiency of both methods compared to existing methodologies.
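The iterative pre-conditioning idea above can be illustrated on a toy ill-conditioned least-squares problem: alongside the estimate, a pre-conditioner matrix is itself refined by an iterative update driving it toward the inverse Hessian, and the gradient is pre-multiplied by it. A hedged sketch in which the exact update rule, step sizes, and problem data are illustrative assumptions, not the dissertation's tuned algorithm:

```python
import numpy as np

# Toy ill-conditioned linear regression: plain gradient descent stalls in the
# well-conditioned directions, while the iteratively pre-conditioned variant
# approaches Newton-like behavior as K approaches H^{-1}.

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 5))
A[:, -1] *= 10.0                       # make the cost ill-conditioned
b = A @ np.ones(5)                     # least-squares solution is all-ones
H = A.T @ A / len(A)                   # Hessian of the quadratic cost

def grad(x):
    return A.T @ (A @ x - b) / len(A)

def gd(x, iters, step=0.01):           # classical gradient descent
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def ipg(x, iters, alpha=1.0, beta=0.01):
    K = np.zeros((5, 5))               # pre-conditioner, refined every iteration
    for _ in range(iters):
        K = K - beta * (H @ K - np.eye(5))   # drive K toward H^{-1}
        x = x - alpha * K @ grad(x)
    return x

x_gd = gd(np.zeros(5), 300)
x_ipg = ipg(np.zeros(5), 300)
```

With the same iteration budget, the pre-conditioned iterate lands much closer to the all-ones solution than plain gradient descent, mirroring the conditioning-attenuation claim in the abstract.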
  • Item
    ESTIMATION AND CONTROL OF NONLINEAR SYSTEMS: MODEL-BASED AND MODEL-FREE APPROACHES
    (2020) Goswami, Debdipta; Paley, Derek A.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    State estimation and subsequent controller design for a general nonlinear system is an important problem that has been studied over the past decades. Many applications, e.g., atmospheric and oceanic sampling or lift control of an airfoil, display strongly nonlinear dynamics with very high dimensionality. Some of these applications use smaller underwater or aerial sensing platforms with insufficient on-board computation power for the Monte Carlo approach of particle filters. Hence, they need a computationally efficient filtering method for state estimation without a severe penalty on performance. On the other hand, the difficulty of obtaining a reliable model of the underlying system, e.g., a high-dimensional fluid dynamical environment or vehicle flow in a complex traffic network, calls for the design of data-driven estimators and controllers when abundant measurements are available from a variety of sensors. This dissertation places these problems in two broad categories: model-based and model-free estimation and output feedback. In the first part of the dissertation, a semi-parametric method with a Gaussian mixture model (GMM) is used to approximate the unknown density of states. Then a Kalman filter and its nonlinear variants are employed to propagate and update each Gaussian mode with a Bayesian update rule. The linear observation model permits a Kalman filter covariance update for each Gaussian mode. The estimation error is shown to be stochastically bounded, and this is illustrated numerically. The estimate is used in observer-based feedback control to stabilize a general closed-loop system. A transfer-operator-based approach is then proposed for the motion update for Bayesian filtering of a nonlinear system. A finite-dimensional approximation of the Perron-Frobenius (PF) operator yields a method called constrained Ulam dynamic mode decomposition (CUDMD). This algorithm is applied for output feedback of a pitching airfoil in unsteady flow.
For the second part, an echo-state network (ESN) based approach equipped with an ensemble Kalman filter is proposed for data-driven estimation of a nonlinear system from a time series. A random reservoir of recurrent neural connections with the echo-state property (ESP) is trained from time-series data. It is then used as a model predictor for an ensemble Kalman filter for sparse estimation. The proposed data-driven estimation method is applied to predict the traffic flow from a set of mobility data of the UMD campus. A data-driven model identification and controller design is also developed for control-affine nonlinear systems that are ubiquitous in several aerospace applications. We seek to find an approximate linear/bilinear representation of these nonlinear systems from data using the extended dynamic mode decomposition algorithm (EDMD) and apply Lie-algebraic methods to analyze controllability and design a controller. The proposed method utilizes the Koopman canonical transform (KCT) to approximate the dynamics by a bilinear system (Koopman bilinear form) under certain assumptions. The accuracy of this approximation is then analytically justified with the universal approximation property of the Koopman eigenfunctions. The resulting bilinear system is then subjected to controllability analysis using the Myhill semigroup and Lie-algebraic structures, and a fixed-endpoint optimal controller is designed using Pontryagin's principle.
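The core EDMD step mentioned above is a least-squares fit: lift the state through a dictionary of observables and fit a linear operator K so that the lifted state propagates linearly. A minimal sketch on a toy scalar system where the lift is exact (the dictionary, dynamics, and data here are illustrative assumptions, not the dissertation's aerospace models):

```python
import numpy as np

# EDMD sketch: with dictionary psi(x) = [1, x, x^2] and linear dynamics
# f(x) = 0.9 x, the lifted dynamics are exactly linear, so least squares
# recovers the finite Koopman approximation K with psi(f(x)) = K psi(x).

def psi(x):
    return np.array([1.0, x, x * x])     # dictionary of observables

f = lambda x: 0.9 * x                    # toy dynamics (illustrative)
xs = np.linspace(-1.0, 1.0, 50)          # snapshot states
Psi0 = np.array([psi(x) for x in xs])    # lifted current states
Psi1 = np.array([psi(f(x)) for x in xs]) # lifted next states
K = np.linalg.lstsq(Psi0, Psi1, rcond=None)[0].T   # Psi1 rows ≈ (K @ psi)^T

# One-step prediction of the state through the lifted linear model:
pred = (K @ psi(0.5))[1]                 # the "x" coordinate of the lifted step
```

For this toy system K comes out diagonal, diag(1, 0.9, 0.81), reflecting that each monomial is itself a Koopman eigenfunction; for a genuinely nonlinear or control-affine system the fit is only approximate, which is where the bilinear (KCT) machinery in the abstract comes in.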
  • Item
    Optimality, Synthesis and a Continuum Model for Collective Motion
    (2019) Halder, Udit; Krishnaprasad, Perinkulam S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    It is of importance to study biological collectives and apply the wisdom so accrued to modern-day engineering problems. In this dissertation we attempt to gain insight into collective behavior, where the main contribution is twofold. First, a `bottom-up' approach is employed to study individual-level control-law synthesis and the emergence thereby of collective behavior. Three different problems, involving single and multiple agents, are studied by both analytical and experimental means. These problems arise from either a practical viewpoint or from attempts at describing biologically plausible feedback mechanisms. One result obtained in this context for a two-agent scenario is that under a particular constant bearing pursuit strategy, the problem exhibits certain features common with the Kepler two-body problem. Laboratory demonstrations of the solutions to these problems are presented. It is to be noted that these types of individual-level control problems can help understand and construct building blocks for group-level behaviors. The second approach is `top-down' in nature. It treats a collective as a whole and asks if its movement minimizes some kind of energy functional. A key goal of this work is to develop wave equations and their solutions for a natural class of optimal control problems with which one can analyze information transfer in flocks. Controllability arguments in infinite-dimensional spaces give strong support to the construction of solutions for such optimal control problems. Since the optimal control problems are infinite-dimensional in the state space and one cannot simply expect Pontryagin's Maximum Principle (PMP) to apply in such a setting, the work has required care and attention to functional-analytic considerations. In this work, it is shown that under a certain assumption of finite co-dimensionality of a reachable set, PMP remains valid.
This assumption is then shown to hold true for the case of a specific ensemble of agents, each with state space the Heisenberg group H(3). Moreover, analysis of optimal controls demonstrates the existence of traveling wave solutions in that setting. Synchronization results are obtained in a high-coupling limit where deviation from neighbors is too costly for every agent. The combination of approaches based on PMP and the calculus of variations has been fruitful in developing a solid new understanding of wave phenomena in collectives. We provide partial results along these lines for the case of a continuum of planar agents (the SE(2) case). Finally, a different top-down, data-driven approach to analyzing collective behavior is also put forward in this thesis. It is known that the total kinetic energy of a flock can be divided into several modes attributed to rigid-body translations, rotations, volume changes, etc. Flight recordings of multiple events of European starling flocks yield time signals of these different energy modes. This approach then seeks an explanation of kinetic energy mode distributions (viewed as flock-scale decisions) by appealing to techniques from evolutionary game theory and optimal control theory. We propose the notion of cognitive cost, which calculates a suitably defined action functional and measures the cost of an event resulting from temporal variations of energy mode distributions.
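For reference, the finite-dimensional form of the Pontryagin Maximum Principle that the thesis extends reads as follows (a generic statement with illustrative notation; the thesis's contribution is its validity in the infinite-dimensional ensemble setting under the finite-codimensionality condition):

```latex
% Generic PMP: along an optimal pair (x^*, u^*) there exists a costate p^*
% such that, with Hamiltonian H built from the dynamics f and running cost L,
\begin{align}
  H(x, p, u) &= \langle p, f(x, u) \rangle - L(x, u), \\
  \dot{x}^*  &= \partial_p H, \qquad \dot{p}^* = -\partial_x H, \\
  H\big(x^*(t), p^*(t), u^*(t)\big) &= \max_{u} \; H\big(x^*(t), p^*(t), u\big).
\end{align}
```

The difficulty flagged in the abstract is that the existence of a nontrivial costate is not automatic when the state space is infinite-dimensional, which is what the finite-codimensionality assumption on the reachable set restores.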
  • Item
    CONTROLLER SYNTHESIS UNDER INFORMATION AND FINITE-TIME LOGICAL CONSTRAINTS
    (2018) Maity, Dipankar; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In robotics, networks, and many related fields, a typical controller design problem needs to address both logical and informational constraints. The logical constraints may arise due to a complex task description or decision-making process, while the information-related constraints emerge naturally as a consequence of limitations on communication and computation capabilities. In the first part of the thesis, we consider the problem of synthesizing an event-based controller to address the information-related constraints in the controller design. We consider dynamical systems that operate under continuous state feedback. This assumes that the measurements are continuously transmitted to the controller in order to generate the input, which increases the cost of communication by requiring substantial communication resources. In many situations, the measurements do not change fast enough for continuous transmission to be necessary. Motivated by this, we consider the case where, instead of continuous feedback, we seek intermittent feedback. As a result, the system trajectory will deviate from its ideal behavior. The question, however, is how much it will deviate. Given an allowed bound on this deviation, can we design a controller that requires fewer measurements than the original controller and still manages to keep the deviation within this prescribed bound? Two important questions remain: 1) What will be the structure of the (optimal) controller? 2) How will the system know the (optimal) instances to transmit the measurement? When the system sends a measurement to the controller, this is called an ``event''. Thus, we are looking for an event-generator and a controller to perform event-based control under constraints on the availability of the state information.
The next part focuses on controller synthesis problems that have logical, spatio-temporal constraints on the trajectory of the system; a robot motion planning problem is a good example of this kind of finite-time, logically constrained problem. We adopt an automata-based approach: we abstract the motion of the robot into an automaton and verify the satisfaction of the logical constraints on this automaton. The abstraction of the dynamics of the robot into an automaton is based on certain reachability guarantees of the robot's dynamics. The controller synthesis problem over the abstracted automaton can then be represented as a shortest-path problem. In part III, we consider the problem of jointly addressing the logical and information constraints. The problem is approached with the notion of robustness of logical constraints. We propose two different frameworks for this problem, with two different notions of robustness and two different approaches to controller synthesis. One framework relies on the abstraction of the dynamical system into a finite transition system, whereas the other relies on tools and results from prescribed performance control to design continuous feedback control that satisfies the robust logical constraints. We adopt a hierarchical controller synthesis method where a continuous feedback controller is designed to satisfy the (robust) logical constraints, and later, that controller is replaced by a suitable event-triggered intermittent feedback controller to cope with the informational constraints.
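The event-triggering idea in the first part can be sketched in a few lines: transmit the state only when its deviation from the last transmitted value crosses a threshold, and let the controller hold the stale measurement between events. The scalar plant, gain, and threshold below are illustrative assumptions, not the thesis's synthesized (optimal) event-generator:

```python
# Event-based feedback sketch: unstable scalar plant x' = a*x + u with
# u = -k * x_hat, where x_hat is the most recently transmitted state.
# Euler simulation; an "event" fires when |x - x_hat| exceeds a threshold.

a, k, dt = 1.0, 2.0, 0.001
threshold = 0.05
x, x_hat, events = 1.0, 1.0, 0
for _ in range(5000):                 # 5 seconds of closed-loop behavior
    if abs(x - x_hat) > threshold:
        x_hat = x                     # event: transmit the current state
        events += 1
    x += dt * (a * x - k * x_hat)     # plant driven by the held measurement

# The state is steered into a small band around the origin using only
# `events` transmissions instead of one per simulation step.
```

This makes the tradeoff in the abstract concrete: the trigger threshold is exactly the "allowed bound on deviation", and tightening it buys tracking accuracy at the price of more frequent transmissions.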
  • Item
    Optimal Control of Heat Engines in Non-equilibrium Statistical Mechanics
    (2017) HUANG, YUNLONG; KRISHNAPRASAD, PERINKULAM S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    A heat engine is a cyclically operated statistical mechanical system that converts heat supplied by a heat bath into mechanical work. The heat engine is operated by varying a system parameter. As it is operated in finite time, this non-equilibrium statistical mechanical system is dissipative. In this dissertation, our research focuses on two heat engines: one is a stochastic oscillator and the other is a capacitor connected to a Nyquist-Johnson resistor (a stochastically driven resistor-capacitor circuit). In the stochastic oscillator, by varying the stiffness of the potential well, the system can convert heat to mechanical work. In the resistor-capacitor circuit, the output of mechanical work is due to the change of the capacitance of the capacitor. These two heat engines are parametrically controlled. A path in the parameter space of a heat engine is termed a protocol. In the first chapter of this dissertation, under the near-equilibrium assumption, with the help of linear response theory, the fluctuation theorem, and stochastic thermodynamics, we consider an inverse diffusion tensor in the parameter space of a heat engine. The inverse diffusion tensor of the stochastic oscillator induces a hyperbolic space structure in the parameter space composed of the stiffness of the potential well and the inverse temperature of the heat bath. The inverse diffusion tensor of the resistor-capacitor circuit induces a Euclidean space structure in the parameter space composed of the capacitance of the capacitor and the inverse temperature of the heat bath. The average dissipation rate of a heat engine is given by a quadratic form (with a positive-definite inverse diffusion tensor) on the tangent space of the system parameter. Along a finite-time protocol of a heat engine, besides the energy dissipation, there are two auxiliary quantities of interest: one is the extracted work of the heat engine and the other is the total heat supply from the bath to the engine.
These two quantities are fundamental to the analysis of the efficiency of a heat engine. In Chapter 2, combining the energy dissipation and the extracted work of a heat engine, we introduce the sub-Riemannian geometry structures underlying both heat engines. In Chapter 3, after defining the efficiency of a heat engine, we show the equivalence between an optimal control problem in the sub-Riemannian geometry of the heat engine and the problem of maximizing the efficiency of the heat engine. In this way, we bring geometric control theory to non-equilibrium statistical mechanics. In particular, we explicate the relation between conjugate point theory and the working loops of a heat engine. As a related calculation, we solve the isoperimetric problem in hyperbolic space as an optimal control problem in Chapter 4. Based on the theoretical analysis in the first four chapters, in the final chapter of the dissertation we adopt level set methods, the mid-point approximation, and the shooting method to design maximum-efficiency working loops for both heat engines. The associated efficiencies of these protocols are computed.
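The quadratic-form statement above can be written out in a generic form consistent with the geometric picture described (notation illustrative; the thesis works with the specific tensors induced by each engine):

```latex
% Near equilibrium, the average dissipation rate along a protocol
% \lambda(t) in parameter space is a quadratic form in the parameter
% velocity, with g(\lambda) the positive-definite inverse diffusion tensor,
% so the total dissipated work over a finite-time protocol of duration \tau
% is an energy functional on paths:
\begin{equation}
  \big\langle \dot{W}_{\mathrm{diss}} \big\rangle
    = \dot{\lambda}^{\mathsf{T}} \, g(\lambda) \, \dot{\lambda},
  \qquad
  W_{\mathrm{diss}}[\lambda]
    = \int_{0}^{\tau} \dot{\lambda}^{\mathsf{T}} g(\lambda) \, \dot{\lambda} \;\mathrm{d}t .
\end{equation}
```

Viewing g as a metric is what turns minimum-dissipation protocol design into a geodesic (and, with the work constraint, sub-Riemannian) problem.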
  • Item
    ADVENTURES ON NETWORKS: DEGREES AND GAMES
    (2015) Pal, Siddharth; Makowski, Armand; La, Richard; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    A network consists of a set of nodes and edges, with the edges representing pairwise connections between nodes. Examples of real-world networks include the Internet, the World Wide Web, social networks, and transportation networks, often modeled as random graphs. In the first half of this thesis, we explore the degree distributions of such random graphs. In homogeneous networks or graphs, the behavior of the (generic) degree of a single node is often thought to reflect the degree distribution of the graph, defined as the usual fractions of nodes with a given degree. To study this preconceived notion, we introduce a general framework to discuss the conditions under which these two degree distributions coincide asymptotically in large random networks. Although Erdős-Rényi graphs, along with other well-known random graph models, satisfy the aforementioned conditions, we show that there are homogeneous random graphs for which this conclusion fails to hold. A counterexample to this common notion is found in the class of random threshold graphs. An implication of this finding is that random threshold graphs cannot be used as a substitute for the Barabási-Albert model in scale-free network modeling, as proposed in some works. Since the Barabási-Albert model was proposed, other network growth models have been introduced that were shown to generate scale-free networks. We study one such basic network growth model, called the fitness model, which captures the inherent attributes of individual nodes through fitness values (drawn from a fitness distribution) that influence network growth. We characterize the tail of the network-wide degree distribution through the fitness distribution and demonstrate that the fitness model is indeed richer than the Barabási-Albert model, in that it is capable of producing power-law degree distributions with varying parameters, along with other non-Poisson degree distributions.
In the second half of the thesis, we look at the interactions between nodes in a game-theoretic setting. As an example, these nodes could represent interacting agents making decisions over time while the edges represent the dependence of their payoffs on the decisions taken by other nodes. We study learning rules that could be adopted by the agents so that the entire system of agents reaches a desired operating point in various scenarios motivated by practical concerns facing engineering systems. For our analysis, we abstract out the network and represent the problem in the strategic-form repeated game setting. We consider two classes of learning rules -- a class of better-reply rules and a new class of rules, which we call, the class of monitoring rules. Motivated by practical concerns, we first consider a scenario in which agents revise their actions asynchronously based on delayed payoff information. We prove that, under the better-reply rules (when certain mild assumptions hold), the action profiles played by the agents converge almost surely to a pure-strategy Nash equilibrium (PSNE) with finite expected convergence time in a large class of games called generalized weakly acyclic games (GWAGs). A similar result is shown to hold for the monitoring rules in GWAGs and also in games satisfying a payoff interdependency structure. Secondly, we investigate a scenario in which the payoff information is unreliable, causing agents to make erroneous decisions occasionally. When the agents follow the better-reply rules and the payoff information becomes more accurate over time, we demonstrate the agents will play a PSNE with probability tending to one in GWAGs. Under a similar setting, when the agents follow the monitoring rule, we show that the action profile weakly converges to certain characterizable PSNE(s). Finally, we study a scenario where an agent might erroneously execute an intended action from time to time. 
Under such a setting, we show that the monitoring rules ensure that the system reaches PSNE(s) that are resilient to deviations by potentially multiple agents.
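    The convergence behavior described above can be illustrated with a minimal Python sketch of asynchronous better-reply dynamics on a simple coordination game (a weakly acyclic game). The function names and the specific game are illustrative assumptions; the thesis's rules handle delayed and unreliable payoff information, which this sketch omits.

    ```python
    import random

    def is_psne(payoff, profile, n_actions):
        """True if no player can strictly improve by a unilateral deviation."""
        return all(
            payoff(i, profile) >= payoff(i, profile[:i] + [a] + profile[i + 1:])
            for i in range(len(profile)) for a in range(n_actions))

    def better_reply_dynamics(payoff, n_players, n_actions,
                              max_steps=10000, seed=0):
        """Asynchronous better-reply dynamics: at each step one randomly
        chosen agent switches to an action that strictly improves its own
        payoff, if such an action exists. In weakly acyclic games, such
        dynamics are absorbed into a PSNE with probability one."""
        rng = random.Random(seed)
        profile = [rng.randrange(n_actions) for _ in range(n_players)]
        for _ in range(max_steps):
            if is_psne(payoff, profile, n_actions):
                return profile  # absorbed at a pure-strategy Nash equilibrium
            i = rng.randrange(n_players)
            current = payoff(i, profile)
            better = [a for a in range(n_actions)
                      if payoff(i, profile[:i] + [a] + profile[i + 1:]) > current]
            if better:
                profile[i] = rng.choice(better)
        return profile

    # Two-player coordination game (weakly acyclic): payoff 1 on a match.
    coord = lambda i, s: 1.0 if s[0] == s[1] else 0.0
    ```

    On the coordination game, every non-equilibrium profile admits a better reply that produces a match, so the dynamics reach a PSNE after a handful of revisions.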
  • Item
    Adaptive Sensing and Processing for Some Computer Vision Problems
    (2014) Warnell, Garrett; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation is concerned with adaptive sensing and processing in computer vision, specifically through the application of computer vision techniques to non-standard sensors. In the first part, we adapt techniques designed to solve the classical computer vision problem of gradient-based surface reconstruction to the problem of phase unwrapping that arises in applications such as interferometric synthetic aperture radar. Specifically, we propose a new formulation of, and solution to, the classical two-dimensional phase unwrapping problem. As is usually done, we use the wrapped principal phase gradient field as a measurement of the absolute phase gradient field. Since this model rarely holds in practice, we explicitly enforce integrability of the gradient measurements through a sparse error-correction model. Using a novel energy-minimization functional, we formulate the phase unwrapping task as a generalized lasso problem. We then jointly estimate the absolute phase and the sparse measurement errors using the alternating direction method of multipliers (ADMM) algorithm. Using an interferometric synthetic aperture radar noise model, we evaluate our technique for several synthetic surfaces and compare the results to recently proposed phase unwrapping techniques. Our method applies new ideas from convex optimization and sparse regularization to this well-studied problem. In the second part, we consider the problem of controlling and processing measurements from a non-traditional, compressive sensing (CS) camera in real time. We focus on how to control the number of measurements it acquires such that this number remains proportional to the amount of foreground information currently present in the scene under observation. To this end, we provide two novel adaptive-rate CS strategies for sparse, time-varying signals using side information. 
The first method utilizes extra cross-validation measurements, and the second exploits extra low-resolution measurements. Unlike the majority of current CS techniques, we do not assume that we know an upper bound on the number of significant coefficients pertaining to the images that comprise the video sequence. Instead, we use the side information to predict this quantity for each upcoming image. Our techniques specify a fixed number of spatially-multiplexed CS measurements to acquire, and they adjust this quantity from image to image. Our strategies are developed in the specific context of background subtraction for surveillance video, and we experimentally validate the proposed methods on real video sequences. Finally, we consider a problem motivated by the application of active pan-tilt-zoom (PTZ) camera control in response to visual saliency. We extend this classical notion to multi-image data collected using a stationary PTZ camera by requiring consistency: the property that each saliency map in the set of those that are generated should assign the same saliency value to distinct regions of the environment that appear in more than one image. We show that processing each image independently will often fail to provide a consistent measure of saliency, and that using an image mosaic to quantify saliency suffers from several drawbacks. We then propose ray saliency: a mosaic-free method for calculating a consistent measure of bottom-up saliency. Experimental results demonstrating the effectiveness of the proposed approach are presented.
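    The measurement model behind the phase unwrapping formulation above can be illustrated in one dimension: the wrapped principal differences of the observed phase serve as measurements of the absolute phase differences, which are then integrated. This minimal Python sketch (function names `wrap` and `unwrap_1d` are illustrative) shows only that model; it omits the sparse error-correction and ADMM machinery that the dissertation adds for the realistic 2-D case.

    ```python
    import math

    def wrap(x):
        """Wrap a phase value into the principal interval (-pi, pi]."""
        return math.atan2(math.sin(x), math.cos(x))

    def unwrap_1d(wrapped):
        """Recover absolute phase from wrapped samples by integrating the
        wrapped principal phase differences -- the 1-D analogue of using
        the wrapped principal gradient field as a measurement of the
        absolute gradient. Exact only when true successive phase
        differences stay below pi in magnitude (no measurement errors)."""
        phase = [wrapped[0]]
        for k in range(1, len(wrapped)):
            phase.append(phase[-1] + wrap(wrapped[k] - wrapped[k - 1]))
        return phase
    ```

    When the true phase jumps by more than pi between samples, these difference measurements become inconsistent; the sparse error-correction model in the dissertation exists precisely to absorb such violations.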