<p>Building artificial intelligence systems from a human-centered perspective is increasingly urgent, as large-scale machine learning systems ranging from personalized recommender systems to language and image generative models are deployed to interact with people daily. In this thesis, we propose a guideline for building such systems in three steps: (<em>i</em>) identifying the people of interest, their role, and the core characteristics relevant to the learning task; (<em>ii</em>) modeling these characteristics in a useful and reliable manner; and (<em>iii</em>) incorporating these models into the design of learning algorithms in a principled way.</p>
<p>We ground this guideline in two applications: personalized recommender systems and decision-support systems. For recommender systems, we follow the guideline by (<em>i</em>) focusing on users’ evolving preferences, (<em>ii</em>) modeling them as dynamical systems, and (<em>iii</em>) developing efficient online learning algorithms, with provable guarantees, for interacting with users who exhibit different preference dynamics. For decision-support systems, we (<em>i</em>) take decision-makers’ risk preferences as the core characteristic of concern, (<em>ii</em>) model them in the system’s objective function, and (<em>iii</em>) provide a general procedure, with statistical guarantees, for learning models under diverse risk preferences. We conclude by discussing the future of human-centered machine learning and the role of interdisciplinary research in this field.</p>
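<p>As a purely illustrative picture of the recommender-system instantiation, the sketch below pairs a toy drifting-preference model (the dynamical system) with a simple online learner that discounts old observations so it can track the drift. The random-walk dynamics, the forgetting-factor least-squares estimator, and the epsilon-greedy selection rule are assumptions made here for exposition, not the algorithms developed in the thesis.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

# --- Toy environment: a user whose preference vector drifts over time ---
d, n_items, T = 5, 20, 500
items = rng.normal(size=(n_items, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)

theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

def step_preferences(theta, drift=0.05):
    """Slowly perturb and renormalize the preference vector: a simple dynamical system."""
    theta = theta + drift * rng.normal(size=theta.shape)
    return theta / np.linalg.norm(theta)

# --- Online learner: forgetting-factor least squares + epsilon-greedy selection ---
gamma, eps, lam = 0.95, 0.1, 1.0      # forgetting factor, exploration rate, ridge weight
A = lam * np.eye(d)                   # discounted design matrix
b = np.zeros(d)                       # discounted response vector

total_reward = 0.0
for t in range(T):
    theta_hat = np.linalg.solve(A, b)          # current preference estimate
    if rng.random() &lt; eps:
        a = rng.integers(n_items)              # explore
    else:
        a = int(np.argmax(items @ theta_hat))  # exploit the current estimate
    x = items[a]
    reward = float(theta @ x) + 0.1 * rng.normal()
    total_reward += reward

    # Discount past data so the estimate can track the drifting preferences.
    A = gamma * A + np.outer(x, x) + (1 - gamma) * lam * np.eye(d)
    b = gamma * b + reward * x

    theta = step_preferences(theta)

print(f"average reward over {T} rounds: {total_reward / T:.3f}")
</code></pre>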
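<p>Similarly, the following toy sketch shows one way a risk preference can enter the objective of a decision-support system: an empirical conditional value-at-risk (CVaR) criterion at level <em>alpha</em> interpolates between risk-neutral selection (average loss) and risk-averse selection (tail loss). The CVaR criterion, the two candidate policies, and their loss distributions are illustrative assumptions, not the thesis’s procedure.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)

def empirical_cvar(losses, alpha):
    """Mean of the worst (1 - alpha) fraction of losses (empirical CVaR at level alpha)."""
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return float(np.mean(np.sort(losses)[-k:]))

# Toy data: per-outcome losses of two candidate decision policies (rows = policies).
# Policy 1 has a better average loss but a heavier loss tail than policy 0.
n = 2000
losses = np.vstack([
    1.0 + 0.2 * rng.normal(size=n),            # safe policy: mean 1.0, light tail
    0.5 + rng.exponential(scale=0.4, size=n),  # risky policy: mean 0.9, heavy tail
])

for alpha in (0.0, 0.9, 0.99):   # alpha encodes the decision-maker's risk aversion
    scores = [empirical_cvar(l, alpha) for l in losses]
    best = int(np.argmin(scores))
    print(f"alpha={alpha:.2f}  scores={np.round(scores, 3)}  chosen policy={best}")
</code></pre>
<p>Running the loop with different values of <em>alpha</em> flips the chosen policy: a risk-neutral objective (<em>alpha</em> = 0) prefers the policy with the lower average loss, while risk-averse objectives prefer the policy with the lighter tail, which is the sense in which the risk preference lives inside the objective function.</p>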