<p> Wearable Cognitive Assistance (WCA) applications run on wearable mobile devices to provide guidance for real-world tasks. Physical assembly tasks have been a significant focus of WCA research. We introduce new techniques that support the development of WCA applications for more complex assembly tasks than previous techniques could handle. In addition, our work reduces the burden on developers creating WCA applications by eliminating the need to collect and label real training images; we accomplish this by training computer vision models on synthetically generated images. This dissertation also investigates escalation to human experts in cases where a user is not satisfied with an application's automated guidance. Lastly, we develop a new version of a software framework for WCA applications and evaluate ways in which WCA applications can benefit from running computations directly on mobile devices. </p>