Computer Science Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2756

Search Results

Now showing 1 - 3 of 3
  • Computational Methods for Natural Walking in Virtual Reality
    (2024) Williams, Niall; Manocha, Dinesh; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Virtual reality (VR) allows users to feel as though they are really present in a computer-generated virtual environment (VE). A key component of an immersive virtual experience is the ability to interact with the VE, which includes the ability to explore it. Such exploration is usually not straightforward, since the VE is typically shaped differently from the user's physical environment. This can cause users to walk along virtual routes whose corresponding physical routes are obstructed by unseen physical objects or by the boundaries of the tracked physical space. In this dissertation, we develop new algorithms to understand how people explore large VEs and to enable them to do so using natural walking while incurring fewer collisions with physical objects in their surroundings. Our methods leverage the concept of alignment between the physical and virtual spaces, robot motion planning, and statistical models of human visual perception. Through a series of user studies and simulations, we show that our algorithms enable users to explore large VEs with fewer collisions, allow us to predict the navigability of a pair of environments without collecting any locomotion data, and deepen our understanding of how human perception functions during locomotion in VR.
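A rough Python sketch of the alignment idea this abstract describes (not the dissertation's actual algorithm): it scores how well the user's physical and virtual surroundings agree by comparing obstacle clearance at corresponding poses, and maps the mismatch to a bounded rotation gain of the kind used in redirected walking. The function names, the clearance-based alignment measure, the gain bounds, and the scaling constant are all illustrative assumptions.

```python
import numpy as np

def clearance(position, obstacles):
    """Distance from a 2D position to the nearest obstacle point."""
    p = np.asarray(position, dtype=float)
    return min(np.linalg.norm(p - np.asarray(o, dtype=float)) for o in obstacles)

def alignment_error(phys_pos, phys_obstacles, virt_pos, virt_obstacles):
    """Proxy for physical/virtual alignment: how differently close the user
    is to obstacles in the two spaces (0 means the clearances match)."""
    return abs(clearance(phys_pos, phys_obstacles)
               - clearance(virt_pos, virt_obstacles))

def rotation_gain(error, g_min=0.85, g_max=1.3, scale=0.5):
    """Map misalignment to a rotation gain clamped to roughly perceptually
    tolerable bounds; larger error means stronger redirection."""
    return float(np.clip(1.0 + scale * error, g_min, g_max))

# Hypothetical usage: redirect more strongly while clearances disagree.
err = alignment_error((1.0, 2.0), [(0.0, 0.0), (3.0, 2.0)],
                      (4.0, 1.0), [(4.0, 3.0)])
print(rotation_gain(err))
```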
  • Minimal Perception: Enabling Autonomy on Resource-Constrained Robots
    (2023) Singh, Chahat Deep; Aloimonos, Yiannis; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Mobile robots are widely used and crucial in diverse fields because they can perform tasks autonomously. They enhance efficiency and safety, and they enable novel applications such as precision agriculture, environmental monitoring, disaster management, and inspection. Perception plays a vital role in this autonomous behavior: it is a robot's ability to gather, process, and interpret environmental data, and it facilitates navigation, object identification, and real-time reaction. By integrating perception, robots achieve onboard autonomy, operating without constant human intervention even in remote or hazardous areas, which improves adaptability and scalability. This thesis explores the challenge of developing autonomous systems for smaller robots used in precise tasks such as confined-space inspection and robotic pollination. These robots face limits on real-time perception due to computing, power, and sensing constraints. To address this, we draw inspiration from small organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite minimalistic sensory and neural systems. This research aims to provide insights into designing compact, efficient, and minimal perception systems for tiny autonomous robots: by streamlining and simplifying design and functionality, these compact robots can maximize efficiency and overcome the limitations imposed by their size. In this work, a Minimal Perception framework is proposed that enables onboard autonomy in resource-constrained robots at scales (as small as a credit card) that were not possible before. Minimal perception refers to a simplified, efficient, and selective approach, in both hardware and software, to gathering and processing sensory information. Adopting a task-centric perspective allows further refinement of the framework; for instance, jumping spiders, measuring just half an inch in length, perceive their surroundings through sparse vision distributed across multiple eyes, and capture prey with remarkable agility. The contributions of this work can be summarized as follows:
    1. Utilizing minimal quantities, such as the uncertainty in optical flow, to enable autonomous navigation, static and dynamic obstacle avoidance, and flight through unknown gaps.
    2. Using the principles of interactive perception to segment novel objects in cluttered environments, eliminating the reliance on neural-network training for object recognition.
    3. Introducing a generative simulator, WorldGen, that can generate countless cities and petabytes of high-quality annotated data, minimizing the need for laborious 3D modeling and annotation for perception and autonomy tasks.
    4. Proposing a method to predict metric dense depth maps in never-seen or out-of-domain environments by fusing information from a traditional RGB camera and a sparse 64-pixel depth sensor (a sketch of one possible fusion strategy follows this abstract).
    5. Demonstrating the autonomous capabilities of tiny robots on both aerial and ground platforms: (a) an autonomous car smaller than a credit card (70 mm), and (b) a bee drone 120 mm in length, showcasing navigation, depth perception in all four main directions, and effective avoidance of both static and dynamic obstacles.
    In conclusion, integrating the minimal perception framework into tiny mobile robots substantially expands what their perception and autonomy can achieve, and this thesis aims to be a step toward a future where tiny robots operate synergistically in swarms, advancing fields such as exploration, disaster response, and distributed sensing.
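Contribution 4 fuses a dense but up-to-scale monocular depth prediction with a few metric readings from a low-resolution depth sensor. The sketch below shows one plausible, deliberately simple fusion strategy, a global least-squares scale-and-shift fit; the thesis's actual method is learned and almost certainly more sophisticated, so treat the function name, the linear model, and the synthetic usage numbers as assumptions.

```python
import numpy as np

def fuse_metric_depth(rel_depth, sparse_px, sparse_metric):
    """Fit metric = a * relative + b at the sensor's pixels by least squares,
    then apply the fit densely to recover a metric depth map.

    rel_depth:     (H, W) up-to-scale depth from a monocular network
    sparse_px:     (N, 2) integer (row, col) pixels covered by the sensor
    sparse_metric: (N,)   metric depth readings at those pixels
    """
    r = rel_depth[sparse_px[:, 0], sparse_px[:, 1]]
    A = np.stack([r, np.ones_like(r)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, sparse_metric, rcond=None)
    return a * rel_depth + b

# Hypothetical usage with an 8x8 (64-pixel) sensor grid on a 240x320 image.
H, W = 240, 320
rel = np.random.rand(H, W)
rows, cols = np.meshgrid(np.linspace(15, H - 16, 8, dtype=int),
                         np.linspace(20, W - 21, 8, dtype=int))
px = np.stack([rows.ravel(), cols.ravel()], axis=1)
metric = 2.0 * rel[px[:, 0], px[:, 1]] + 0.5  # synthetic "ground truth"
dense = fuse_metric_depth(rel, px, metric)    # recovers ~2.0 * rel + 0.5
```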
  • Active Vision Based Embodied-AI Design For Nano-UAV Autonomy
    (2021) Jagannatha Sanket, Nitin; Aloimonos, Yiannis; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The human fascination with mimicking ultra-efficient flying beings like birds and bees has led to a rapid rise in aerial robots in the recent decade. These aerial robots now represent a market of over 10 billion US dollars. The future of aerial robots, or Unmanned Aerial Vehicles (UAVs), commonly called drones, is very bright because of their utility in a myriad of applications. I envision drones delivering packages to our homes, finding survivors in collapsed buildings, pollinating flowers, inspecting bridges, performing surveillance of cities, competing in sports, and even serving as pets. In particular, quadrotors have become the go-to platform for aerial robotics due to the simplicity of their mechanical design, their vertical takeoff and landing capabilities, and their agility. The eternal pursuit of improved drone safety and power efficiency has given rise to the research and development of smaller yet smarter drones; smaller drones are also more agile and more easily distributed across tasks as swarms. Embodied Artificial Intelligence (AI) has been a major driver of this area. Classically, the approach to designing such nano-drones maintains a strict separation between perception, planning, and control, and relies on a 3D map of the scene that is used to plan paths executed by a control algorithm. In contrast, nature's never-ending quest to improve the efficiency of flying agents through genetic evolution led to birds developing remarkable eyes and brains tailored for agile flight in complex environments, a software and hardware co-design solution. Smaller flying agents such as insects, at the other end of the size and computation spectrum, adopted an ingenious approach: using movement to gather more information. Early pioneers of robotics remarked on this observation and coined the concept of "Active Perception," which proposes that an agent can move in an exploratory way to gather information that compensates for its lack of computation and sensing. Such controlled movement imposes additional constraints on the data being perceived, making the perception problem simpler. Inspired by this concept, in this thesis I present a novel approach to algorithmic design for nano aerial robots (flying robots the size of a hummingbird) based on active perception, tightly coupling perception, planning, and control into sensorimotor loops using only on-board sensing and computation. This is done by re-imagining each aerial robot as a series of hierarchical sensorimotor loops in which the higher loops build on the inner ones, so that resources and computation can be efficiently re-used. Activeness is presented and utilized in four different forms to enable large-scale autonomy under Size, Weight, Area, and Power (SWAP) constraints tighter than previously demonstrated: 1. by moving the agent itself; 2. by employing an active sensor; 3. by moving a part of the agent's body; and 4. by hallucinating active movements. Next, to make this work practically applicable, I show how hardware and software co-design can be performed to optimize the form of active perception to be used. Finally, I present the world's first prototype of a RoboBeeHive, which shows how to integrate multiple competences centered around active vision in all its glory. The contributions of this thesis are as follows:
    • The world's first functional prototype of a RoboBeeHive that can artificially pollinate flowers.
    • The first method that allows a quadrotor to fly through gaps of unknown shape, location, and size using a single monocular camera with only on-board sensing and computation.
    • The first method to dodge dynamic obstacles of unknown shape, size, and location on a quadrotor using a monocular event camera; our series of shallow neural networks is trained in simulation and transfers to the real world without any fine-tuning or re-training.
    • The first method to detect unmarked drones by detecting their propellers; our neural network is trained in simulation and transfers to the real world without any fine-tuning or re-training.
    • A method to adaptively change the baseline of a stereo camera system for quadrotor navigation (a geometric sketch of this idea follows the list).
    • The first method to introduce the use of saliency to select features in a direct visual odometry pipeline.
    • A comprehensive benchmark of software and hardware for embodied AI that can serve as a blueprint for researchers and practitioners alike.
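The adaptive stereo-baseline contribution can be motivated from the standard stereo relation Z = f*B/d: a first-order error analysis gives dZ ~ Z^2 * dd / (f * B), so depth error grows quadratically with range and shrinks as the baseline widens. The snippet below is a hedged geometric illustration, not the thesis's method; the disparity-noise figure, baseline limits, and function names are assumptions.

```python
def stereo_depth_error(z_m, f_px, baseline_m, disp_err_px=0.25):
    """First-order stereo depth error at range z: dZ ~ z^2 * dd / (f * B),
    derived from Z = f * B / d with disparity uncertainty dd (in pixels)."""
    return (z_m ** 2) * disp_err_px / (f_px * baseline_m)

def baseline_for_budget(z_m, f_px, max_err_m, disp_err_px=0.25,
                        b_min_m=0.04, b_max_m=0.30):
    """Smallest baseline (meters) keeping depth error at range z below
    max_err_m, clamped to the (assumed) travel of the sliding camera."""
    b = (z_m ** 2) * disp_err_px / (f_px * max_err_m)
    return min(max(b, b_min_m), b_max_m)

# Hypothetical numbers: 400 px focal length, 5 m range, 10 cm error budget.
print(baseline_for_budget(z_m=5.0, f_px=400.0, max_err_m=0.10))  # ~0.156 m
```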