Hi, I'm Leah!
I love bringing a philosophical lens to modern technology problems
I am a Ph.D. candidate in Computer Science at the University of Minnesota, where I am advised by Stevie Chancellor and am a member of the GroupLens research lab. My Ph.D. research agenda focuses on how taking a philosophical stance, marrying theory building with empirical substantiation, can rigorously inform more ethical data decisions. I explore this mainly in online communities and predictive systems. Inspired by my work as a yoga teacher, my guiding philosophy is to care for the communities whose data makes our research and technologies possible. CV below:
LinkedIn | Google Scholar | ajman004[at]umn[dot]edu
Life Happenings!
PhD Lessons from Running and Escaping Rooms
The PhD students at GroupLens have a variety of hobbies! From knitting to playing video games, we all have non-research activities that contribute to our lives. In this article, we asked two PhD st…
grouplens.org
CSpotlight: The Power Of Research
Name: Leah Ajmani
Pronouns: She/Her
Hometown: Short Hills, New Jersey
Program: Ph.D. Computer Science
Graduation year: Spring 2026
Interests: Yoga, rock climbing, playing with her dog

Why did you choose to pursue a degree in computer science specifically at the University of Minnesota?
My undergraduate degrees at Cornell University are in computer science and philosophy. I've always loved STEM growing up, as well as things like creative writing and reading. It wasn't until I got to college that I realized a lot of the foundations of computer science are very theoretical, really different ways of thinking and solving problems. That is how I stuck with computer science but also incorporated philosophy to be more creative. My research mentor, Brian McInnis, connected me with my current advisor, Stevie Chancellor. Stevie and I bonded over our research ideas. My research is on the ethics of machine learning and artificial intelligence (AI). It used to be a niche topic, but now it's popular. My research agenda is very intersectional because it merges my philosophy and computer science studies, and Stevie just got it. That's why I moved here.

How did you become interested in computer science? What are your specific interests within the field?
A lot of the research I do is about taking concepts from philosophy and seeing how we can apply them to emerging problems in computational sciences, specifically in predictive systems, such as systems that use machine learning or AI methods. My work tries to apply them to very high-stakes problems, like detecting suicidality or detecting recidivism risks. The idea behind my research is that if we can take what we know about ethics and philosophy and apply it to these very specific problems, we can prevent a lot of the harm we are seeing around data rights, fairness, and responsible AI.

Congratulations on earning the Computer Science Fellowship! How will this scholarship impact your academic and extracurricular work?
I'm really lucky to have the fellowship; it has made a world of difference. It has really been a game changer, especially because it covers my first three years. It has allowed me to explore research with a lot of freedom. I think that's how I came across my research agenda, specifically my recent projects on justice in AI. That's not something I would've been able to do if I were funded through a specific project.

Are you involved in any student groups? What inspired you to get involved?
I owe the Community of Scholars program the biggest shoutout for their writing initiative. I've only engaged with them as a participant. I've used their small writing groups every semester since my second year. I've participated in their big writers' retreat, and it was a game changer for my Ph.D. I owe them all of my writing success.

What do you hope to contribute to the computer science community at the University?
I hope to contribute an interdisciplinary perspective. One of the big internal conflicts I had while applying to Ph.D. programs was thinking, "Do I even want to be in a computer science program?" My research would fit well in information sciences and sometimes in communications. Within Human-Computer Interaction, which is my area of interest, you see a lot of people who aren't in computer science programs doing cool work that applies to computer science. What I hope to contribute is a sense of breadth when we think about computer science and computational science.
In my view, thinking about the ethics of what you are building with technology is part of being a computer scientist. I hope I can contribute not only that perspective but also scaffolding and tools for others to do the same.

Have you been involved with any research on campus?
My main job is research; it's pretty much my 9-5. I've come out with a few publications since coming into the Ph.D. program. I have a publication about Wikipedia at the Computer-Supported Cooperative Work (CSCW) conference. I publish pretty regularly at the Conference on Fairness, Accountability, and Transparency (FAccT). I have a publication that will be released soon for the 2024 annual conference. My research is on predictive systems, machine learning, and AI. It has recently shifted more towards AI since ChatGPT came out.

What advice do you have for incoming computer science students?
Everyone is allowed to be broad with what they consider computer science. The classes that help you become a computer scientist might not be in the computer science department. I think that's an OK place to be. I would encourage people not to pigeonhole themselves or their peers when they're thinking about what computer science is. I also encourage students to get into research.

What are your plans after graduation?
I hope to stay somewhere in research, whether that is at an academic institution or a tech company. I haven't decided yet, but I genuinely enjoy research, and I would love it if research is my 9-5.
cse.umn.edu
Reflecting on Consent at Scale
In the era of internet research, everyone is a participant. A PhD student stood at the front of a crowded conference hall. They’d just presented their paper on social capital in distributed onl…
grouplens.org
Wordy Writer Survival Guide: How to Make Academic Writing More Accessible
As GroupLensers received CHI reviews back, many of us were told our papers were “long,” “inaccessible,” and even “bloated.” These critiques are fair. Human-Compu…
grouplens.org
Research
Secondary Stakeholders in AI: Fighting for, Brokering, and Navigating Agency
Leah Ajmani, Nuredin Ali Abdelkadir, Stevie Chancellor
ACM Conference on Fairness, Accountability, and Transparency - FAccT (2025)
Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge
🏆 Honorable Mention for Best Paper [top 5%]
Leah Ajmani, Talia Bhatt, Michael Ann Devito
Proceedings of the Conference on Human Factors in Computing Systems - CHI (2025)
Whose Knowledge is Valued? Epistemic Injustice in CSCW Applications
Leah Ajmani, Jasmine C Foriest, Jordan Taylor, Kyle Pittman, Sarah Gilbert, Michael Ann Devito
Proceedings of the ACM on Human-Computer Interaction - CSCW (2024)
Data Agency Theory: A Precise Theory of Justice for AI Applications
Leah Ajmani, Logan Stapleton, Mo Houtti, Stevie Chancellor
ACM Conference on Fairness, Accountability, and Transparency - FAccT (2024)
What even is AI justice? Proposing a Theory of Justice at FAccT 24
In 2022, Politico reported that Crisis Text Line (CTL), a non-profit SMS suicide hotline, used one-on-one crisis conversations to train a for-profit customer service chatbot. In response, CTL …
grouplens.org
Peer-Produced Friction: How Page Protection on Wikipedia Affects Editor Engagement and Concentration
Leah Ajmani, Nick Vincent, Stevie Chancellor
Proceedings of the ACM on Human-Computer Interaction - CSCW (2023)
Page Protection: The Blunt Instrument of Wikipedia
Wikipedia is a 22-year-old, wonky online encyclopedia that we’ve all used at some point. Currently (2023), Wikipedia has a dizzying amount of information in numerous languages. The English languag…
grouplens.org
Epistemic Injustice in Online Communities: Unpacking the Values of Knowledge Creation and Curation within CSCW Applications [Workshop Proposal]
Leah Ajmani, Mo Houtti, Jasmine Foriest, Nick Vincent, Michael Ann Devito, Isaac Johnson
Proceedings of the ACM on Human-Computer Interaction - CSCW (2023)
A Systematic Review of Ethics Disclosures in Predictive Mental Health Research
Leah Ajmani*, Stevie Chancellor*, Bijal Mehta, Casey Fiesler, Michael Zimmer, Munmun De Choudhury
*Both authors contributed equally to the paper
ACM Conference on Fairness, Accountability, and Transparency - FAccT (2023)
“I See Me Here”: Mental Health Content, Community, and Algorithmic Curation on TikTok
Ashlee Milton, Leah Ajmani, Michael Ann Devito, Stevie Chancellor
Proceedings of the Conference on Human Factors in Computing Systems - CHI (2023)
Ethical Tensions, Norms, and Directions in the Extraction of Online Volunteer Work [Workshop Proposal]
Hanlin Li, Leah Ajmani, Moyan Zhou, Nicholas Vincent, Sohyeon Hwang, Tiziano Piccardi, Sneha Narayan, Sherae Daniel, Veniamin Veselovsky
Proceedings of the ACM on Human-Computer Interaction - CSCW (2022)
Engagement or Knowledge Retention: Exploring Trade-offs in Promoting Discussion at News Websites
Brian McInnis, Leah Ajmani, Steven Dow
Proceedings of the ACM on Human-Computer Interaction - CSCW (2022)
Reporting the Community Beat: Practices for Moderating Online Discussion at a News Website
🏆 Impact Recognition
Brian McInnis, Leah Ajmani, Lu Sun, Yiwen Hou, Ziwen Zeng, Steven P. Dow
Proceedings of the ACM on Human-Computer Interaction - CSCW (2021)