A. James Clark School of Engineering
Permanent URI for this community: http://hdl.handle.net/1903/1654
The collections in this community comprise faculty research works, as well as graduate theses and dissertations.
4 results
Search Results
Item: Hand Gesture Recognition Using EGaIn-Silicone Soft Sensors (MDPI, 2021-05-05) Shin, Sungtae; Yoon, Han UI; Yoo, Byungseok

Exploiting hand gestures for non-verbal communication has extraordinary potential in human-computer interaction (HCI). A data glove is an apparatus widely used to recognize hand gestures. To improve the functionality of the data glove, a highly stretchable sensor with a reliable signal-to-noise ratio is indispensable. To this end, this study focused on developing soft silicone microchannel sensors using a eutectic gallium-indium (EGaIn) liquid metal alloy, and on a hand gesture recognition system based on a data glove built with the proposed soft sensor. The EGaIn-silicone sensor was uniquely designed with two sensing channels to monitor finger joint movements and to facilitate injection of the EGaIn alloy into the meander-type microchannels. We recruited 15 participants to collect a hand gesture dataset covering 12 static hand gestures, and used the dataset to evaluate the proposed data glove's gesture recognition performance with six traditional classification algorithms. Random forest showed the highest classification accuracy (97.3%) and linear discriminant analysis (LDA) the lowest (87.4%). The non-linearity of the proposed sensor degraded the accuracy of LDA; the other classifiers, however, adequately overcame it and achieved high accuracies (>90%).

Item: Can You Do That Again? Time Series Consolidation as a Robust Method of Tailoring Gesture Recognition to Individual Users (MDPI, 2022-10-03) Dankovich, Louis J. IV; Vaughn-Cooke, Monifa; Bergbreiter, Sarah

Robust inter-session modeling of gestures is still an open learning challenge. A sleeve equipped with capacitive strap sensors was used to capture two gesture data sets from a convenience sample of eight subjects. Two pipelines were explored.
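The classifier comparison reported in the EGaIn data-glove abstract above can be illustrated with a minimal sketch. The synthetic dataset, feature count, and default hyperparameters below are assumptions standing in for the study's real sensor data and tuned models:

```python
# Sketch: comparing random forest vs. LDA on synthetic stand-ins for
# multi-channel glove sensor features (NOT the paper's EGaIn sensor data).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 12 "gesture" classes; 10 features standing in for per-finger channels.
X, y = make_classification(n_samples=1200, n_features=10, n_informative=8,
                           n_redundant=0, n_classes=12,
                           n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: {acc:.3f}")
```

On such synthetic data the exact accuracies will differ from the paper's 97.3% and 87.4%; the sketch only shows the evaluation pattern.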
In FILT, a novel two-stage algorithm was introduced that uses an unsupervised learning algorithm to find samples representing gesture transitions and discards them prior to training and validating conventional models. In TSC (Time Series Consolidation), a confusion matrix was used to automatically consolidate commonly confused class labels, resulting in a set of gestures tailored to an individual subject's abilities. The inter-session testing accuracy using the TSC method increased from a baseline inter-session average of 42.47 ± 3.83% to 93.02 ± 4.97%, while retaining an average of 5.29 ± 0.46 of the 11 possible gesture categories. These pipelines used classic machine learning algorithms, which require relatively small amounts of data and computational power compared to deep learning solutions. These methods may also offer more flexibility in interface design for users with impairments that limit their manual dexterity or ability to make gestures reliably, and may be possible to implement on edge devices with low computational power.

Item: Recognizing Visual Categories by Commonality and Diversity (2015) Choi, Jonghyun; Davis, Larry Steven; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Visual categories refer to categories of objects or scenes in the computer vision literature. Building a well-performing classifier for visual categories is challenging because it requires a high level of generalization: the categories have large within-class variability. We present several methods to build generalizable classifiers for visual categories by exploiting the commonality and diversity of labeled samples and of the category definitions to improve category classification accuracy. First, we describe a method to discover and add unlabeled samples from auxiliary sources to categories of interest for building better classifiers.
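The confusion-matrix consolidation idea from the TSC pipeline above can be sketched as follows. The merge threshold and the rule of merging on high confusion in either direction are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: consolidate class labels whose mutual confusion exceeds a threshold,
# using union-find to group transitively confused classes.
# Threshold and merge rule are illustrative assumptions.
import numpy as np

def consolidate(conf, threshold=0.2):
    """Map each label to a merged label, given a raw confusion matrix."""
    n = conf.shape[0]
    rates = conf / conf.sum(axis=1, keepdims=True)  # row-normalize to rates
    parent = list(range(n))                         # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]           # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # merge i and j if confusion is high in either direction
            if rates[i, j] > threshold or rates[j, i] > threshold:
                parent[find(j)] = find(i)

    return {i: find(i) for i in range(n)}

# Classes 1 and 2 are heavily confused with each other; class 0 is clean.
conf = np.array([[18, 1, 1],
                 [1, 12, 7],
                 [0, 8, 12]])
print(consolidate(conf))  # classes 1 and 2 collapse into one label
```

Relabeling the training data through this map and retraining is what lets a per-subject model trade a few gesture categories for much higher inter-session accuracy.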
In the literature, given a pool of unlabeled samples, the samples to be added are usually discovered based on low-level visual signatures such as edge statistics, shape, or color, within an unsupervised or semi-supervised learning framework. This approach is inexpensive, as it does not require human intervention, but it generally does not provide useful information for accuracy improvement because the selected samples are visually similar to the existing set of samples. Samples added by active learning, on the other hand, provide different visual aspects of categories and contribute to learning a better classifier, but are expensive because they need human labeling. To obtain high-quality samples at lower annotation cost, we present a method to discover and add samples from unlabeled image pools that are visually diverse but coherent with the category definition, using higher-level visual aspects captured by a set of learned attributes. The method significantly improves classification accuracy over the baselines without human intervention. Second, we describe how to learn an ensemble of classifiers that captures both commonly shared information and diversity among the training samples. To learn such ensemble classifiers, we first discover discriminative sub-categories of the labeled samples for diversity. We then learn an ensemble of discriminative classifiers with a constraint that minimizes the rank of the stacked matrix of classifiers. The resulting set of classifiers both shares the category-wide commonality and preserves the diversity of subcategories. The proposed ensemble classifier improves recognition accuracy significantly over the baselines and state-of-the-art subcategory-based ensemble classifiers, especially for the challenging categories. Third, we explore the commonality and diversity of semantic relationships of category definitions to improve classification accuracy in an efficient manner.
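The subcategory-ensemble idea described earlier in this abstract can be sketched in simplified form: discover sub-categories per class by clustering, train one linear classifier per sub-category, and inspect the nuclear norm (the standard convex surrogate for rank) of the stacked classifier matrix. This is a hedged stand-in for the thesis's rank-constrained optimization, not its actual formulation; the clustering method, cluster count, and classifier choice are assumptions:

```python
# Sketch: per-class k-means sub-categories, one one-vs-rest linear classifier
# per sub-category, and the nuclear norm of the stacked classifier matrix
# (the quantity a low-rank constraint would penalize). Illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

weights = []
for c in np.unique(y):
    Xc = X[y == c]
    # discover 2 sub-categories within each class for diversity
    subs = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xc)
    for s in np.unique(subs):
        # one-vs-rest target: members of this sub-category are positive
        target = np.zeros(len(X))
        pos_idx = np.where(y == c)[0][subs == s]
        target[pos_idx] = 1
        clf = LogisticRegression(max_iter=1000).fit(X, target)
        weights.append(clf.coef_.ravel())

W = np.vstack(weights)                       # stacked classifier matrix
nuclear_norm = np.linalg.norm(W, ord="nuc")  # convex surrogate for rank(W)
print(W.shape, round(nuclear_norm, 2))
```

In the thesis's formulation the classifiers are learned jointly under the rank constraint, rather than independently as here, which is what forces them to share category-wide commonality.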
Specifically, our classification model identifies the most helpful relational semantic queries to discriminatively refine the model with a small amount of semantic feedback in interactive iterations. We improve classification accuracy on challenging categories that have very few training samples via knowledge transferred from related categories that have a larger number of training samples, by solving a semantically constrained transfer learning optimization problem. Finally, we summarize the ideas presented and discuss possible future work.

Item: Automated quantification and classification of human kidney microstructures obtained by optical coherence tomography (2009) Li, Qian; Chen, Yu; Bioengineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Optical coherence tomography (OCT) is a rapidly emerging imaging modality that can non-invasively provide cross-sectional, high-resolution images of tissue morphology, such as the kidney, in situ and in real time. Because the viability of a donor kidney is closely correlated with its tubular morphology, and large image datasets are expected when using OCT to scan an entire kidney, automated image analysis methods are needed to quantify spatially resolved morphometric parameters such as tubular diameter and to classify various microstructures. In this study, we imaged the human kidney in vitro, quantified the diameters of hollow structures such as blood vessels and uriniferous tubules, and classified those structures automatically. The quantification accuracy was validated. This work can enable studies to determine the clinical utility of OCT for kidney imaging, as well as studies to evaluate kidney morphology as a biomarker for assessing the kidney's viability prior to transplantation.
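The kind of automated diameter quantification the OCT abstract describes can be sketched with thresholding and connected-component labeling. The synthetic image, threshold value, and equivalent-circle diameter formula below are illustrative assumptions, not the study's actual pipeline:

```python
# Sketch: estimate equivalent diameters of hollow (dark) structures in a
# 2-D image via thresholding and connected-component labeling.
# The synthetic image stands in for a segmented OCT cross-section.
import numpy as np
from scipy import ndimage

# Synthetic "tissue": bright background with two dark circular lumens
# of radii 20 px and 12 px (true diameters 40 px and 24 px).
img = np.full((200, 200), 0.8)
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx, r in [(60, 60, 20), (140, 130, 12)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 0.1

lumens = img < 0.5                        # threshold dark (hollow) regions
labels, n = ndimage.label(lumens)         # connected components
for i in range(1, n + 1):
    area = np.sum(labels == i)
    diameter = 2 * np.sqrt(area / np.pi)  # equivalent-circle diameter
    print(f"structure {i}: diameter = {diameter:.1f} px")
```

A real pipeline would additionally convert pixel measurements to physical units using the OCT scan's lateral resolution, and classify components (vessel vs. tubule) from shape and intensity features.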