The paper studies the problem of distributed average consensus in sensor networks with quantized data. We consider two versions of the algorithm, one with unbounded quantizers and one with bounded quantizers. To achieve consensus, dither (small noise) is added to the sensor states before quantization. Using stochastic approximation techniques, we show that the sensor states asymptotically reach consensus, converging to a common finite random variable. We then study analytically the tradeoffs among the distance of this limiting random variable from the desired average, the consensus convergence rate, the quantizer parameters, and the network topology. We cast these tradeoffs as an optimal quantizer design problem, which we solve. A numerical study illustrates the design tradeoffs.
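As a rough illustration of the dithered-quantization idea sketched above, the following Python snippet simulates one possible form of the quantized consensus iteration on a small ring network. The specific update rule, the uniform dither on a quantization bin, the decreasing stochastic-approximation step size, and all function and parameter names (quantize, dithered_consensus, delta, alpha0) are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np


def quantize(x, delta):
    """Uniform (unbounded) quantizer with step size delta."""
    return delta * np.round(x / delta)


def dithered_consensus(x0, adjacency, delta, alpha0, num_iters, rng=None):
    """Sketch of average consensus with dithered, quantized state exchange.

    x0        : initial sensor measurements, shape (n,)
    adjacency : symmetric 0/1 adjacency matrix of the communication graph
    delta     : quantizer step size (assumed parameter)
    alpha0    : base step size; alpha_k = alpha0 / (k + 1) is an assumed
                decreasing schedule typical of stochastic approximation
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    n = len(x)
    L = np.diag(adjacency.sum(axis=1)) - adjacency  # graph Laplacian

    for k in range(num_iters):
        # Add i.i.d. dither uniform on one quantization bin before
        # quantizing, so the quantization error is zero-mean.
        dither = rng.uniform(-delta / 2, delta / 2, size=n)
        q = quantize(x + dither, delta)
        # Laplacian-driven consensus update using the quantized states.
        alpha_k = alpha0 / (k + 1)
        x = x - alpha_k * (L @ q)
    return x


if __name__ == "__main__":
    # Example: 5-node ring network (hypothetical test case).
    n = 5
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    x0 = np.array([1.0, 3.0, -2.0, 0.5, 4.0])
    x_final = dithered_consensus(x0, A, delta=0.1, alpha0=0.5, num_iters=5000)
    print("true average :", x0.mean())
    print("final states :", x_final)
```

In this sketch the nodes agree on a common value close to, but in general randomly offset from, the true average, consistent with the limiting random variable discussed in the abstract; the size of the offset depends on the quantizer step delta and the step-size schedule.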