Municipalities around the world provide public access to their traffic surveillance cameras via web interfaces. Although such interfaces usually offer only low-frame-rate, low-resolution video, the diversity and low cost of this data present an opportunity for researchers to work on problems ranging from finding empty parking spots on a street to enforcing traffic rules. Modern machine learning algorithms based on Convolutional Neural Networks (CNNs) can accurately locate and analyze each car in an image. However, CNNs are notoriously hungry for expensive labelled training data, which in practice is affordable only to large companies. We aim to close this gap by incorporating synthetic, computer-generated data into the training process. The goal is to significantly reduce both the amount of labelled real data required and the per-image annotation cost, thereby democratizing the practical use of CNNs for surveillance cameras. The contributions of this thesis are: 1) a dataset of 8,000 labelled low-resolution images of individual cars and a dataset of 80,000 synthetic car images tailored to surveillance cameras, both annotated with pixel-level background masks, car type, color, and orientation; 2) a dataset of over 1,000 high-quality 3D CAD models of cars; 3) experiments demonstrating the usefulness of synthetic images for detection and semantic segmentation in surveillance-camera scenarios; and 4) a dataset management tool that facilitates working with annotations in computer vision. In summary, the dissertation aims to promote synthetic images as a tool for machine learning and to outline their advantages and limitations.