A. James Clark School of Engineering
Permanent URI for this community: http://hdl.handle.net/1903/1654
The collections in this community comprise faculty research works, as well as graduate theses and dissertations.
Search Results
2 results
Item
Computational Foundations for Safe and Efficient Human-Robot Collaboration in Assembly Cells (2016)
Morato, Carlos W; Gupta, Satyandra K; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments: they can recognize and locate a part in a box of miscellaneous parts, and they are adept at complex manipulation in tight spaces. Their sensory characteristics, motor abilities, knowledge, and skills allow them to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick-and-place operations and are highly repeatable in placement tasks. Robots can perform tasks at high speed while maintaining precision, can operate for long periods of time, and are well suited to applying high forces and torques.

Typically, robots are used in mass production, while small-batch and custom production operations predominantly rely on manual labor. High labor costs make it difficult for small and medium manufacturers, which are mainly involved in small-batch and custom production, to remain cost competitive in high-wage markets. These manufacturers need a way to reduce the labor cost of assembly operations. Purely robotic cells will not provide them the necessary flexibility. Creating hybrid cells, in which humans and robots collaborate in close physical proximity, is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks so that humans and robots collaborate by performing the sub-tasks best suited to each.

Realizing hybrid cells that enable effective human-robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells:

- We should be able to automatically generate plans for operating hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses two challenges. First, generating operation plans for complex assemblies is difficult; the complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is to develop the computational foundations for automatically generating plans for the operation of hybrid cells, addressing both assembly complexity and motion constraints.

- Collaboration between humans and robots in the assembly cell is practical only if human safety can be ensured during the assembly tasks that require it. The second objective of the dissertation is to evaluate options for real-time monitoring of the state of the human operator with respect to the robot and to develop strategies for taking appropriate measures when a planned robot move may compromise the operator's safety. To be competitive in the market, the developed solution must also account for cost without significantly compromising quality.
- In the envisioned hybrid cell, we rely on human operators to bring parts into the cell. If the human operator selects the wrong part or fails to place it correctly, the robot will be unable to perform the task assigned to it. If the error goes undetected, it can lead to a defective product and to inefficiencies in the cell operation. Human error can stem from confusion caused by poor-quality instructions or from the operator not paying adequate attention to them. To ensure smooth, error-free operation of the cell, we need to monitor the state of the assembly operations in it. The third objective of the dissertation is to identify and track parts in the cell and to automatically generate instructions for corrective actions when a human operator deviates from the selected plan. Corrective actions may involve re-planning, if assembly can continue from the current state, or issuing warnings and generating instructions to undo the current task.

Item
Bio-Inspired Small Field Perception for Navigation and Localization of MAV's in Cluttered Environments (2015)
Escobar-Alvarez, Hector Domingo; Humbert, Sean J; Aerospace Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Insects are capable of agile pursuit of small targets while flying in complex, cluttered environments. They are able to discern a moving background from smaller targets by combining a lightweight, fast vision system with efficient algorithms carried out in their neurons. Engineered systems, by contrast, lack such capabilities because they require large sensors, complex computations, or both. Bio-inspired small-field perception mechanisms therefore have the potential to enhance the navigation of small unmanned aircraft systems in cluttered, unknown environments.

In this dissertation, we propose and investigate three methods for extracting information about small-field objects from optic flow. The first method, "flow of flow", is analogous to processes taking place at the medulla level of the fruit-fly visuomotor system. The other two methods are engineering approaches analogous to the figure-detection-sensitive neurons in the lobula. All three methods demonstrated effective extraction of small-field information from optic flow. The methods recover the relative distance and azimuth location of obstacles from an optic flow model based on a parameterization of an environment containing small- and wide-field obstacles. The three methodologies extract the high spatial frequency content of the optic flow by means of an elementary motion detector, Fourier series, and wavelet transforms, respectively; this extracted signal contains the information about the small-field obstacles (a sketch of this idea follows the abstract). The three methods were implemented on board both a ground vehicle and an aerial vehicle to demonstrate and validate obstacle-avoidance navigation in cluttered environments. Lastly, a localization framework based on wide-field integration of nearness information (the inverse of depth) is used to estimate vehicle navigation states in an unknown environment. Simulation of the localization framework demonstrates the ability to navigate to a target position using only nearness information.
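The Fourier-series variant of the small-field extraction described in the second abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical example, not the dissertation's implementation: it fits a low-order Fourier series to an optic flow signal sampled over azimuth to model the smooth wide-field component dominated by self-motion, and treats the residual as the high spatial frequency content attributable to a nearby small-field obstacle. The function name, number of harmonics, and synthetic flow profile are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical sketch (not the dissertation's code): separate the smooth wide-field
# optic flow component, dominated by self-motion, from the high spatial frequency
# residual induced by a nearby small-field obstacle, using a low-order Fourier fit.

def split_optic_flow(flow, n_harmonics=2):
    """Return (wide_field_estimate, small_field_residual) for a 1-D optic flow
    signal sampled uniformly over 360 degrees of azimuth."""
    n = flow.size
    azimuth = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

    # Low-order Fourier basis: constant term plus a few cosine/sine harmonics.
    basis = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        basis.append(np.cos(k * azimuth))
        basis.append(np.sin(k * azimuth))
    A = np.column_stack(basis)

    # Least-squares fit captures the smooth wide-field pattern.
    coeffs, *_ = np.linalg.lstsq(A, flow, rcond=None)
    wide_field = A @ coeffs
    small_field = flow - wide_field  # high-frequency content left by close obstacles
    return wide_field, small_field

if __name__ == "__main__":
    # Synthetic example: smooth translational flow plus a narrow bump from a close obstacle.
    azimuth = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    flow = np.sin(azimuth) + 0.8 * np.exp(-((azimuth - 1.0) ** 2) / (2 * 0.05 ** 2))
    _, small_field = split_optic_flow(flow)
    print("Estimated obstacle azimuth (rad):", azimuth[np.argmax(np.abs(small_field))])
```

In this toy setup the residual peaks near the obstacle's azimuth, which is the kind of relative bearing cue the abstract describes; the wavelet and elementary-motion-detector variants mentioned in the abstract would replace the Fourier fit with different high-pass operations.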