Carnegie Mellon University

Reframing Privacy in the Digital Age: The Shift to Data Use Control

thesis
posted on 2023-09-12, 19:22 authored by Maria Arciniegas Gome

This dissertation is concerned with privacy: specifically, vulnerabilities to harm that stem from infringements on individuals’ privacy that are unique to or exacerbated by the modern power of predictive algorithms in the contemporary digital landscape. For one ubiquitous example, consider how facial recognition technology has evolved over the past decade and fundamentally altered the sense in which our personal image is exposed, searchable, and manipulable in digital spaces. Based on relatively few data points (often photos freely uploaded to online albums), modern algorithms can identify individuals in pictures across a variety of contexts—in different settings, from different angles, even at vastly different ages, sometimes including baby pictures! Relatedly, reverse image search is now quite an effective tool, in many cases allowing anyone with access to easily ascertain someone’s identity from a single photograph. And of course, image manipulation has progressed by leaps and bounds in recent years, approaching a point where predictive algorithms can, for instance, generate false but eerily realistic portrayals of people in situations they may never have actually been in.

Our point of departure is a conceptual argument centered on the “blurring” of two previously distinct conceptual categories—observations vs. guesses—and the moral ramifications of this blurring. The arguments we make are intended not only to be responsive to the current state of technological development, but also to be forward-looking, taking into account the possible trajectories of these new technologies and how they may impact our lives as they become ever more accurate and widespread. For another small example: it is already the case that algorithmic tools exist to compile “personality profiles” of, say, job applicants based on their publicly available digital footprint—social media posts, images, “likes”, purchase histories, etc. Is this an invasion of privacy? Does the answer depend on how accurate such profiles are? It has, of course, always been the case that job applicants are assessed and appraised by their (human!) interviewers; however, there are moral norms that divide fair appraisals from invasive ones. Even if this division is not entirely sharp, there are clear cases of transgression—for instance, few would support a potential employer’s “right” to hire an investigator to follow an applicant around for weeks, or to access their private records (personal correspondence, medical history, diary entries, etc.). What happens as algorithmic predictive tools approach the point of functionally simulating such morally transgressive cases? This is just one example of the type of “blurring” that concerns us in this dissertation. Our analysis extends from here to explore and clarify the nature and scope of the corresponding moral ramifications, providing a framework for tackling questions of design, implementation, and policy that ought to govern these increasingly ubiquitous technologies.

In Chapter 1 we begin by setting the stage for the special problem of privacy in the digital age. As noted, we argue that this stems primarily from the modern power of predictive algorithms to “blur” the distinction between guessing and observing. At human-scale levels of reasoning, this distinction is sound and morally salient, typically marking the difference between (mostly) harmless (and inaccurate) speculation and direct invasion of privacy. But what happens when machines are capable of making highly sophisticated guesses that can functionally (in terms of accuracy, detail, etc.) emulate direct observation? The moral relevance of this distinction as a binary is thereby upended, and consequently, the practical, social, and legal standards for what counts as “privacy” must be re-examined.

Privacy is a fundamental concept supporting individual autonomy and well-being along multiple dimensions; ignoring paradigm shifts in our notion of privacy therefore risks deep and widespread harms. Deepfake pornography, for instance, can cause severe emotional distress and even economic damage to its victims. And how should potential job applicants adapt to a landscape in which their every action may contribute to a “personality profile” that determines their suitability as an employee? Perhaps that social media post asking for therapist recommendations was a bad strategic decision, in this light. 

Our analysis in Chapter 1 continues by challenging the suitability of “data control” approaches to safeguarding privacy. Briefly: this is the idea that the proper way to protect individuals from digital abuses of the sort described above is by requiring their explicit consent to release various data—clicking “agree” to the privacy policies of your social media platforms, for example, and explicitly managing what aspects of your data you are willing to share and what you aren’t. Essentially, we argue that the blurring described above increasingly renders this sort of protection-through-data-control impractical. The natural fix—and the line of attack that we pursue throughout the thesis—is a shift to constraining/regulating uses of data rather than its mere possession. This is a complex topic that we return to again and again; at this stage we lay out an important distinction between “question-answering” and “action-guiding” senses of data use, discuss some of the challenges and complexities of regulating use in this context, and provide some concrete examples of policy and regulatory structures that might support such an approach.

In Chapter 2 we dig deeper into the “blurring” discussed in Chapter 1, specifically focusing on cases where the collision between guessing and observing is only “partial” or “intermediate”—many algorithms are better at guessing than humans but still fall short of functionally emulating direct observation. To what extent do the arguments and policy implications considered in Chapter 1 still apply in such intermediate collision cases? Naturally, we contend that our arguments are still highly applicable, though our reasoning must be refined. We begin by distinguishing a spectrum of intermediate cases based, loosely speaking, on different ways that an algorithm’s “actual” accuracy may or may not match its “perceived” accuracy. Of course, these concepts are fleshed out in the chapter itself. These distinctions in turn help us frame the fundamental concepts of informational harm versus presentational harm that we introduce subsequently: that is, roughly, harms that stem from some factual information becoming known, versus harms that are rooted in being presented in some way (whether or not the presentation is factual), respectively. This framework allows us to clearly articulate the privacy-related harms that come along with even “intermediate” cases, and thus broaden and generalize our argument to this much larger domain.

Finally, in Chapter 3 we return to the deep and crucial challenge of actually addressing the many privacy concerns we have raised previously. The concept of consent is key here since it often plays a morally transformative role in such cases—making the difference between intrusion and invitation. Unfortunately, “individual consent” is extremely tricky in this context, for several overlapping reasons. Following a long tradition in bioethics, we contend that the value and transformative nature of consent rely on its being both informed and uncoerced, and there are myriad ways for these necessary conditions to fail in modern digital landscapes. We explore many aspects of these failures and conclude that highly individualized conceptions or practical implementations of consent are not truly viable here. This, combined with the fundamental shift to use control motivated in the previous chapters, leads us to the exploration of regulatory structures with which we conclude: structures that are both socially distributed and focused on data uses rather than data ownership, shifting the burden of responsibility from the individual to social structures of governance.

Privacy underpins our ability to function, grow, and thrive as human beings. Many of the major technological advances we are witnessing in our lifetime are drastically impacting, or have the potential to drastically impact, our most fundamental conceptions of what privacy is and our ability to preserve it. Of course, this does not mean that these advancements are “bad” in themselves—on the contrary, some may support unprecedented improvements to our quality of life. However, in order to reap such benefits, we cannot disregard the dangers. My thesis is that a shift to a “use control” paradigm for privacy protection and management, paired with regulatory structures that shift the burden from individuals to institutions, will be essential to adapting and preserving our core notions of privacy. The framework I argue for intersects many social and policy dimensions; my hope is that this work advances the discussion and helps lay some of the crucial conceptual groundwork for a thoughtful and effective implementation.

History

Date

2023-08-01

Degree Type

  • Dissertation

Department

  • Philosophy

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

David Danks, Alex London
