Modeling the Adversary to Evaluate Password Strength With Limited Samples
In an effort to improve security by preventing users from picking weak passwords, system administrators set password-composition policies: sets of requirements that passwords must meet. Guidelines for such policies have been published by various groups, including the National Institute of Standards and Technology (NIST) in the United States, but this guidance has not been empirically verified. In fact, our research group and others have found it to be inaccurate. In this thesis, we provide a metric for evaluating the security of password-composition policies that improves on previous machine-learning approaches. We make several major contributions to passwords research. First, we develop a guess-calculator framework that automatically learns a model of adversary guessing from a training set of prior data mixed with samples, and applies this model to a set of test passwords. Second, we introduce several enhancements to the underlying grammar that increase the power of the learning algorithm and improve guessing efficiency over previous approaches. Third, we use the guess-calculator framework to study the guessability of passwords under various policies, and we provide methodological and statistical guidance for conducting such studies and analyzing their results. While much of this thesis focuses on an offline-attack threat model, in which an adversary can make trillions of guesses, we also provide guidance on evaluating policies under an online-attack model, in which the adversary can make only a small number of guesses before being locked out by the authentication system.
- Institute for Software Research
- Doctor of Philosophy (PhD)