Carla Griggio
Carla is a tenure-track assistant professor in the Department of Computer Science at Aalborg University in Copenhagen, where she researches and teaches Human-Computer Interaction. Her main research studies how communication technologies, especially messaging apps, affect interpersonal relationships, and how the way people use one platform interconnects with how they use others. She conducts empirical studies to understand how people adapt software to fit their communication needs and builds prototypes that explore ways of giving them richer control over their expression and online privacy. She currently leads a four-year research project on managing privacy and social boundaries in interoperable messaging platforms.
Interventions
This talk will introduce Helen Nissenbaum's theory of Contextual Integrity as a framework for understanding privacy in messaging platforms. Contextual Integrity views privacy not as keeping information secret, but as making sure information flows in ways that match people’s expectations in a given context; in other words, it asks what feels appropriate to share, with whom, and for what purpose. For example, if Alice shares her live location with Bob through a messaging app, she likely expects the app to use her location only to deliver it to Bob. But if the app also uses her location to target ads, she may feel that her privacy was breached. The problem isn’t that the location was shared, but that it was shared in a way that didn’t match the context or her understanding of how the information would be used.
I will explain the theoretical framework with examples of how it can be adapted to identify and explain privacy expectations around particular messaging features, and discuss how it can be applied to interoperable messaging to surface potential privacy concerns.
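As a minimal illustration (a hypothetical sketch only; the class and function names below are invented for this example and are not an established API or material from the talk), the Alice-and-Bob case can be modeled as an information flow described by Contextual Integrity's five parameters and checked against the norms Alice expects:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InformationFlow:
        # The five Contextual Integrity parameters describing one flow of personal information.
        subject: str                 # whose information it is
        sender: str                  # who transmits it
        recipient: str               # who receives it
        info_type: str               # what kind of information
        transmission_principle: str  # the conditions under which it is expected to flow

    def preserves_contextual_integrity(flow, expected_norms):
        # A flow preserves contextual integrity only if it matches a norm
        # that is appropriate in the given context.
        return flow in expected_norms

    # Alice expects her live location to flow to Bob, solely to be delivered to him.
    norms = {
        InformationFlow("Alice", "Alice", "Bob", "live location", "deliver to chosen contact"),
    }

    delivery = InformationFlow("Alice", "Alice", "Bob", "live location", "deliver to chosen contact")
    ads = InformationFlow("Alice", "Alice", "ad network", "live location", "targeted advertising")

    print(preserves_contextual_integrity(delivery, norms))  # True: matches Alice's expectations
    print(preserves_contextual_integrity(ads, norms))       # False: same data, but a flow she did not expect

The point of the sketch is that the same piece of data can be appropriate in one flow and a privacy breach in another; what changes is not the information but the recipient and the transmission principle.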
Messaging platforms offer a variety of privacy-protecting features, such as disappearing messages, password-protected chats, and end-to-end encryption (E2EE), which primarily protect message contents. Beyond such features, "untraceable communication" tools for instant messaging protect users from network attackers observing transport-layer metadata, which can reveal who communicates with whom, when, and how often. However, unlike E2EE, the effectiveness of these tools depends on large anonymity sets, making widespread user adoption critical.

This talk presents a research study with 189 users of messaging apps about their perceptions of "untraceability" as a concept, as well as their opinions on the widespread availability of tools for untraceability. The study approaches untraceability from a broad conceptual standpoint: rather than focusing on a particular tool or implementation, we analyze how users reason about what features two fictitious messaging platforms, Texty and Chatty, should incorporate to prevent third parties from "knowing who communicates with whom". The results point to a critical gap between how users and privacy experts understand untraceability, as well as tensions between users who see untraceability as a protection of individual privacy and users who see it as a threat to online safety and criminal accountability. Beyond untraceability, I discuss how this research is relevant to the design of messaging platforms that promote privacy as a central value.
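As a minimal, hypothetical illustration (not part of the study and not modeled on any specific tool), the sketch below shows the kind of transport-layer metadata a network observer could log even when message contents are end-to-end encrypted, and why hiding it requires a large anonymity set:

    from collections import Counter

    # Toy example: each record is (sender, recipient, timestamp) as seen on the network.
    # Message contents are never visible here; only the metadata is.
    observed_metadata = [
        ("Alice", "Bob", "2024-05-01 09:00"),
        ("Alice", "Bob", "2024-05-01 09:05"),
        ("Carol", "Dan", "2024-05-01 10:12"),
        ("Alice", "Bob", "2024-05-02 21:30"),
    ]

    # From metadata alone, the observer learns who communicates with whom and how often.
    pair_frequency = Counter((sender, recipient) for sender, recipient, _ in observed_metadata)
    for (sender, recipient), count in pair_frequency.items():
        print(f"{sender} -> {recipient}: {count} messages")

    # Untraceable-communication tools aim to hide exactly this pattern, for example by
    # making each user's traffic indistinguishable from everyone else's; the larger the
    # anonymity set of participating users, the less an observer can single anyone out.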