The Internet of Things enables many applications in which networks of devices collectively gather and process data for insight. These devices, however, are vulnerable to cyber-attacks that compromise their processing capabilities and jeopardize their objectives. This thesis studies secure distributed inference in adversarial settings. We focus on distributed estimation, where networked entities measure an unknown parameter (for example, a team of robots sensing an unknown environment, or a network of smart meters collecting data about the electricity grid) and process their local measurements, together with information obtained from neighboring devices, to estimate the parameter's value. Powerful adversaries manipulate the devices' processing and communication protocols and compromise the data that they collect. Through coordinated cyber-attacks, an adversary may arbitrarily control the behavior of some of the devices. Without proper countermeasures, these attacks propagate throughout the network and lead to catastrophic outcomes. In this thesis, we design resilient distributed estimation algorithms that mitigate the effects of cyber-attacks. We propose a distributed method that explicitly detects adversarial communications between devices. The detector alerts devices to malicious information received from their neighbors and prevents malicious entities from misleading the estimation procedure. Further, we develop a resilient distributed estimator that withstands measurement attacks, cyber-attacks that manipulate the devices' measurements of the unknown parameter. Our resilient estimator ensures that, even when some measurements are arbitrarily altered, all of the devices, through cooperation, consistently estimate the unknown parameter. For the algorithms we design, we establish performance guarantees for distributed estimation in adversarial settings. Finally, we illustrate the performance of our algorithms and verify our theoretical results through simulation examples.