<p dir="ltr">Evasion attacks perturb the inputs of machine-learning models to induce undesired behaviors. Existing metrics typically evaluate the risk of evasion attacks via untargeted robustness, which does not correspond to many practical adversarial goals. To address this gap, we propose customized robustness, a general framework of robustness definitions tailored to specific use cases. Within this framework, we identify definitions of robustness that remain unexplored by existing work, including robustness involving multiple input or output instances and robustness involving multiple models. We further explore new threat models under these definitions and design new metrics that better capture the associated risks. Our definitions also motivate stronger and more efficient tools for assessing robustness in many real-world use cases, including loss functions that more accurately capture adversary goals.</p>