Show simple item record

dc.contributor.advisor: Krishnaprasad, P.S. [en_US]
dc.contributor.author: Brubaker, Charlie [en_US]
dc.contributor.author: Wojtkowski, Stephanie [en_US]
dc.date.accessioned: 2007-05-23T10:09:50Z
dc.date.available: 2007-05-23T10:09:50Z
dc.date.issued: 2000 [en_US]
dc.identifier.uri: http://hdl.handle.net/1903/6156
dc.description.abstract: In this research, computer vision was used to locate a sound source for feedback into an audio system. The camera was first calibrated to determine the relationship between the world coordinates and the pixel coordinates of an object. To aid in the calibration process, computer vision techniques such as gradient calculation and the Hough Transform were used to extract the calibration points from a series of images. These points, along with their corresponding world coordinates, were then used in Roger Tsai's camera model to calibrate the camera. The intrinsic and extrinsic camera parameters were then used to find the vector of the sound source in an image. Again, vision processing was used to extract the sound source from an image using red as a detectable feature. The largest red region was isolated, and the centroid of that region was used to mark the location of the sound source. Finally, Tsai's model was used in reverse to find the vector in the world along which the camera lies. [en_US]
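The pipeline described in the abstract (a calibrated camera mapping world coordinates to pixel coordinates, plus a red-region detector whose centroid marks the sound source) can be sketched roughly as follows. The camera matrix, thresholds, and function names are illustrative assumptions, not values from the thesis, and Tsai's radial-distortion correction is omitted.

```python
import numpy as np

def project(world_pt, K, R, t):
    """Map a 3-D world point to pixel coordinates with a simplified
    pinhole model (intrinsics K, extrinsics R, t). Tsai's full model
    additionally corrects for radial lens distortion."""
    cam = R @ world_pt + t        # world frame -> camera frame (extrinsics)
    u, v, w = K @ cam             # camera frame -> homogeneous pixels (intrinsics)
    return (u / w, v / w)

def red_centroid(image, red_thresh=150, margin=50):
    """Return the (row, col) centroid of red pixels in an RGB image,
    or None if no pixel is sufficiently red. Thresholds are illustrative;
    for brevity this averages all red pixels rather than isolating the
    largest connected red region as the thesis does."""
    r, g, b = (image[..., i].astype(int) for i in range(3))
    mask = (r > red_thresh) & (r - g > margin) & (r - b > margin)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return (rows.mean(), cols.mean())
```

Running `project` in reverse (given a pixel and the calibrated parameters) yields not a unique 3-D point but a ray in the world, which matches the abstract's description of recovering "the vector in the world" from an image location.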
dc.format.extent: 192573 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US [en_US]
dc.relation.ispartofseries: ISR; UG 2000-2 [en_US]
dc.subject: computer vision [en_US]
dc.subject: robotics [en_US]
dc.subject: camera calibration [en_US]
dc.subject: Sensor-Actuator Networks [en_US]
dc.title: Using Computer Vision to Train a Sound Tracking System [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: ISR [en_US]

