There are an estimated one million or more Deaf and severely hard-of-hearing individuals living in the United States. For many of these individuals, American Sign Language (ASL) is their primary means of communication. However, for most day-to-day interactions, native ASL users must either get by with a mixture of gestures and written communication in a non-native language or seek the assistance of an interpreter. Whereas automated translation between many other languages has benefited greatly from decades of research into speech recognition and Statistical Machine Translation, ASL's lack of aural and written components has limited exploration into automated translation of ASL. In this thesis, I focus on work towards recognizing components of American Sign Language in real time. I first evaluate the suitability of a real-time, depth-based generative hand-tracking model for estimating ASL handshapes. I then present a study of ASL fingerspelling recognition, in which real-time tracking and classification methods are applied to continuous sign sequences. Finally, I discuss the future steps needed to expand real-time fingerspelling recognition to the problem of general ASL recognition.