Understanding and Improving Secure Development from a Human-Centered Perspective
Secure software development remains a difficult task, as evidenced by the vulnerabilities regularly discovered in production code. Researchers in computer security have worked for many years to mitigate this problem by building better security tooling, creating secure programming languages, improving secure development processes, and designing educational interventions. The success of these interventions depends on both their technical attributes and the human and organizational factors that shape adoption, usability, and efficacy. While much past work has explored the technical factors, comparatively little has examined the human and organizational ones.
To begin closing this gap, I explore why and how developers introduce, find, and fix vulnerabilities as they build secure code. Through in-depth qualitative analysis of data collected during an iteration of a secure programming competition, I find that teams that produced a detailed initial design, and teams that worked on security steadily throughout development, tended to introduce fewer vulnerabilities. I also find that different types of vulnerabilities are discovered and fixed differently. Vulnerabilities arising from simple programming mistakes tended to be found incidentally: almost any testing, security-related or not, surfaced them. They were often caught by the teams themselves during build-phase testing (before attacks from other teams) and were typically easy to fix. In contrast, vulnerabilities rooted in misunderstood security properties required deeper knowledge and more focused testing, and were rarely found by build teams before being exploited by other teams.
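To illustrate the two vulnerability classes, consider the following hedged sketch (a hypothetical example, not drawn from the competition data). The first function contains a simple programming mistake that almost any functional test exposes; the second passes every functional test while violating a security property, so only focused security analysis would catch it.

```rust
// Hypothetical illustration of the two vulnerability classes discussed above.

// Class 1: a simple programming mistake. The off-by-one index panics on any
// test that asks for the last element, so ordinary testing finds it
// "incidentally", whether or not the tester was thinking about security.
#[allow(dead_code)]
fn last_byte_buggy(data: &[u8]) -> u8 {
    data[data.len()] // off-by-one: should be data.len() - 1
}

// The easy fix: use the bounds-safe, idiomatic accessor.
fn last_byte_fixed(data: &[u8]) -> Option<u8> {
    data.last().copied()
}

// Class 2: a misunderstood security property. This comparison returns the
// right answer on every functional test, but it exits early on the first
// mismatched byte, leaking a timing signal about the secret. Input/output
// testing cannot reveal the flaw; only deliberate security analysis does.
fn insecure_compare(secret: &[u8], guess: &[u8]) -> bool {
    if secret.len() != guess.len() {
        return false;
    }
    for (a, b) in secret.iter().zip(guess) {
        if a != b {
            return false; // early exit leaks how many leading bytes matched
        }
    }
    true
}

fn main() {
    let data = b"hello";
    assert_eq!(last_byte_fixed(data), Some(b'o'));
    // Functionally correct, so ordinary tests pass despite the timing flaw:
    assert!(insecure_compare(b"s3cret", b"s3cret"));
    assert!(!insecure_compare(b"s3cret", b"guess!"));
    println!("ok");
}
```

The contrast mirrors the finding above: the first flaw fails loudly under any test, while the second hides behind correct functional behavior.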
Next, I examine the adoption of current secure development interventions, using the secure programming language Rust as a case study of the benefits and drawbacks of adoption. Through interviews with professional developers who had adopted or attempted to adopt Rust, and a survey of the broader Rust community, I highlight a range of positive features, including good tooling and documentation, benefits to the development lifecycle, and improvement of overall secure coding skills, as well as drawbacks, including a steep learning curve, limited library support, and concerns about hiring additional Rust developers in the future. These results have implications for promoting the adoption of Rust specifically, and of secure programming languages and tools more generally.
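A minimal sketch of the trade-off described above (my own illustration, not an example from the interviews): Rust encodes failure cases in the type system, so the compiler forces callers to handle them, which is both the safety benefit and part of the learning curve.

```rust
fn main() {
    let buf = vec![10, 20, 30];

    // In C, buf[10] would be silent out-of-bounds access. Rust's `get`
    // returns an Option, and the compiler requires callers to handle the
    // out-of-bounds case explicitly before the value can be used.
    match buf.get(10) {
        Some(v) => println!("value: {v}"),
        None => println!("index out of bounds, handled safely"),
    }

    // The borrow checker likewise rejects use-after-move at compile time:
    //   let moved = buf;
    //   println!("{:?}", buf); // error[E0382]: borrow of moved value
    // This strictness contributes to the steep learning curve interviewees
    // reported, but it rules out whole classes of memory-safety bugs.
    assert_eq!(buf.get(1), Some(&20));
}
```

The design choice is the point: errors that would surface at runtime (or not at all) in other languages become compile-time obligations in Rust.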
Lastly, given the importance of understanding the human and organizational factors of secure software development, I explore alternative approaches to conducting these studies that improve validity and reduce the burden on participants. Through a lab study, I evaluate tasking participants with reading code to identify vulnerabilities, or with fixing vulnerabilities in existing code, rather than writing secure code from scratch. I find parallels in the functionality and security of results across the three conditions, with key similarities in the types of vulnerabilities developers introduce or fail to identify. Further, participants in the read and fix conditions reported less frustration and spent less time participating in the study. These results suggest that these methods can serve as alternatives to code-writing studies and point to avenues for future exploration.