Modeling, Quantifying, and Limiting Adversary Knowledge

dc.contributor.advisor: Hicks, Michael
dc.contributor.author: Mardziel, Piotr
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2015-06-25T05:44:03Z
dc.date.available: 2015-06-25T05:44:03Z
dc.date.issued: 2015
dc.description.abstract: Users participating in online services are required to relinquish control over potentially sensitive personal information, exposing them to intentional or unintentional misuse of that information by the service providers. Users wishing to avoid this must either abstain from often extremely useful services or provide false information, which is usually contrary to the terms of service they must abide by. An attractive middle-ground alternative is to keep control in the hands of the users and provide a mechanism through which the information necessary for useful services can be queried. Users then need not trust any external party with the management of their information, but they face the problem of judging which queries from service providers should be answered and which should be refused because they reveal too much sensitive information. Judging query safety is difficult: two queries may each be benign in isolation but reveal more than a user is comfortable with in combination. Additionally, malicious adversaries who wish to learn more than allowed might craft queries that attempt to hide the flows of sensitive information. Finally, users cannot rely on human inspection of queries, given their volume and the general lack of expertise. This thesis tackles the automation of query judgment, giving the self-reliant user a means to distinguish benign queries from dangerous or exploitative ones. The approach is based on explicitly modeling and tracking the knowledge of adversaries as they learn about a user through the queries they are allowed to observe. The approach quantifies the absolute risk to which a user is exposed, taking into account all the information that has already been revealed when deciding whether to answer a query. Proposed techniques for approximate but sound probabilistic inference address the tractability of the approach, letting the user trade off utility (in terms of the queries judged safe) against efficiency (in terms of the expense of knowledge tracking) while maintaining the guarantee that the risk to the user is never underestimated. We apply the approach to settings where user data changes over time and to settings where multiple users wish to pool their data to perform useful collaborative computations without revealing too much information. By addressing one of the major obstacles to the viability of personal information control, this work brings that attractive proposition closer to reality.
dc.identifier: https://doi.org/10.13016/M29327
dc.identifier.uri: http://hdl.handle.net/1903/16470
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: abstract interpretation
dc.subject.pquncontrolled: privacy
dc.subject.pquncontrolled: probabilistic programming
dc.subject.pquncontrolled: quantitative information flow
dc.subject.pquncontrolled: security
dc.title: Modeling, Quantifying, and Limiting Adversary Knowledge
dc.type: Dissertation
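The abstract's central mechanism, tracking an adversary's belief about a secret and refusing queries that would raise it past a risk threshold, can be illustrated concretely. The following is a minimal, hypothetical Python sketch, not the dissertation's implementation: the thesis uses approximate but sound probabilistic inference via abstract interpretation of probabilistic programs, whereas this sketch uses exact enumeration over a small discrete domain. The birth-year secret, the 10% threshold, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of knowledge-based query judgment over a
# discrete secret, in the spirit of the belief tracking the
# abstract describes. Not the dissertation's implementation.

from fractions import Fraction

# Adversary's belief: a probability distribution over candidate
# secret values. Here the secret is a birth year, uniform over
# 1960-2000 from the adversary's point of view (41 values).
belief = {year: Fraction(1, 41) for year in range(1960, 2001)}

THRESHOLD = Fraction(1, 10)  # policy: no value may exceed 10% posterior

def revise(belief, query, observed_output):
    """Bayesian revision: keep only the secrets consistent with the
    observed query output, then renormalize. (Assumes the output is
    actually achievable, so the total is nonzero.)"""
    consistent = {s: p for s, p in belief.items() if query(s) == observed_output}
    total = sum(consistent.values())
    return {s: p / total for s, p in consistent.items()}

def safe_to_answer(belief, query, threshold=THRESHOLD):
    """Check the query against every output the adversary could
    observe, deliberately ignoring the actual secret: if the decision
    depended on the secret, the refusal itself would leak information.
    Answer only if the worst-case posterior stays below the threshold."""
    for out in {query(s) for s in belief}:
        posterior = revise(belief, query, out)
        if max(posterior.values()) > threshold:
            return False
    return True

secret = 1975  # the user's actual secret (unknown to the adversary)

q1 = lambda year: year < 1980   # "born before 1980?" -- diffuse either way
if safe_to_answer(belief, q1):            # True: worst case is 1/20 = 5%
    answer = q1(secret)                   # release the answer...
    belief = revise(belief, q1, answer)   # ...and track what it taught the adversary

q2 = lambda year: year          # identity query: would reveal the secret outright
print(safe_to_answer(belief, q2))         # False: some output drives posterior to 1
```

Quantifying over all possible outputs, rather than just the one the actual secret would produce, is what keeps the risk estimate sound: the answer/refuse decision is independent of the secret, mirroring the abstract's guarantee that the risk to the user is never underestimated.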

Files

Original bundle
Name: Mardziel_umd_0117E_15952.pdf
Size: 1.61 MB
Format: Adobe Portable Document Format