About Me

Hi! I am a PhD candidate in Computer Science and Engineering at UC San Diego, advised by Dr. Margaret Roberts and Dr. David Danks. I do multidisciplinary research broadly in fairness, ethics, and machine learning, centered on user agency and viewed primarily through a sociotechnical lens. My recent work spans philosophy, policy, and data science, using empirical, technically informed methods to construct conceptually coherent frameworks for policy recommendations.

I am currently on the job market! I would like to continue conducting sociotechnical research for the ethical deployment of AI/ML. My projected graduation date is June 2025.

Skills:
Languages: Python, Java, R, D3, JavaScript, C, HTML/CSS, Unix, MATLAB
Data Platforms: Amazon Web Services, Spark
ML Tools: Keras, TensorFlow, Scikit-learn
Applications: Final Cut, LaTeX, Adobe Photoshop, Lightroom, SolidWorks, Arduino
Spoken Languages: Mandarin Chinese (Intermediate-High on the ACTFL/ETS proficiency scale)

Research Areas: Machine Learning, Human-Centered Design, Algorithmic Fairness and Recourse, User Agency and Accountability


Recent Updates:
  • February 2025: Presenting two posters at the GenAI Summit @ UCSD: "Shaping Possibilities: A Framework for Reasoning about the Effects of Generative AI on Modal Beliefs" and "Trustworthiness in Stochastic Systems: Towards Opening the Black Box"
  • September 2024: Panelist for "Decolonizing Tech: Interrogating the Impacts of Generative AI on BIPOC Communities" at the CMD-IT/ACM Richard Tapia Conference
  • August 2024: Invited Talk at Microsoft’s Aether Fairness & Inclusiveness Community Meeting: Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation
  • August 2024: "The Illusion of Artificial Inclusion" featured as Editors' Choice at CHI 2024
  • June 2024: "Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation" accepted at FAccT 2024
  • May 2024: "Recourse for Reclamation: Chatting with Generative Language Models" accepted to CHI 2024 Late Breaking Work
  • March 2024: "Recourse for Reclamation: Chatting with Generative Language Models" featured on Hugging Face's Daily Papers page

    Publications

    1. Trustworthiness in Stochastic Systems: Towards Opening the Black Box
      Jennifer Chien, David Danks. (In Submission)

      [arxiv] [tl;dr]
      • AI systems are increasingly tasked to complete responsibilities with decreasing oversight. This delegation requires users to accept certain risks, typically mitigated by perceived or actual alignment of values between humans and AI, leading to confidence that the system will act as intended. However, stochastic behavior by an AI system threatens to undermine alignment and potential trust. In this work, we take a philosophical perspective on the tension and potential conflict between stochasticity and trustworthiness. We demonstrate how stochasticity complicates traditional methods of establishing trust and evaluate two extant approaches to managing it: (1) eliminating user-facing stochasticity to create deterministic experiences, and (2) allowing users to independently control tolerances for stochasticity. We argue that both approaches are insufficient, as not all forms of stochasticity affect trustworthiness in the same way or to the same degree. Instead, we introduce a novel definition of stochasticity and propose latent value modeling for both AI systems and users to better assess alignment. This work lays a foundational step toward understanding how and when stochasticity impacts trustworthiness, enabling more precise trust calibration in complex AI systems, and underscoring the importance of sociotechnical analyses to effectively address these challenges.
    2. Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation
      Jennifer Chien, David Danks. FAccT 2024.

      [arxiv]
      • Algorithmic harms are commonly categorized as either allocative or representational. This study specifically addresses the latter, focusing on an examination of current definitions of representational harms to discern what is included and what is not. This analysis motivates our expansion beyond behavioral definitions to encompass harms to cognitive and affective states. The paper outlines high-level requirements for measurement: identifying the necessary expertise to implement this approach and illustrating it through a case study. Our work highlights the unique vulnerabilities of large language models to perpetrating representational harms, particularly when these harms go unmeasured and unmitigated. The work concludes by presenting proposed mitigations and delineating when to employ them. The overarching aim of this research is to establish a framework for broadening the definition of representational harms and to translate insights from fairness research into practical measurement and mitigation praxis.
    3. Recourse for Reclamation: Chatting with Generative Language Models
      Jennifer Chien, Kevin R. McKee, Jackie Kay, William Isaac. CHI Late Breaking Work (LBW) 2024.

      [Poster] [arxiv] [30sec Video] [90sec Video]
      • Researchers and developers increasingly rely on toxicity scoring to moderate generative language model outputs, in settings such as customer service, information retrieval, and content generation. However, toxicity scoring may render pertinent information inaccessible, rigidify or "value-lock" cultural norms, and prevent language reclamation processes, particularly for marginalized people. In this work, we extend the concept of algorithmic recourse to generative language models: we provide users a novel mechanism to achieve their desired prediction by dynamically setting thresholds for toxicity filtering. Users thereby exercise increased agency relative to interactions with the baseline system. A pilot study (n = 30) supports the potential of our proposed recourse mechanism, indicating improvements in usability compared to fixed-threshold toxicity-filtering of model outputs. Future work should explore the intersection of toxicity scoring, model controllability, user agency, and language reclamation processes, particularly with regard to the bias that many communities encounter when interacting with generative language models.
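      The core mechanism here, user-adjustable rather than fixed thresholds for toxicity filtering, can be illustrated with a minimal sketch. This is not the paper's implementation: the scorer below is a hypothetical stand-in for a real toxicity classifier, and the term list and scores are invented for illustration.

      ```python
      # Minimal sketch of user-adjustable toxicity filtering (hypothetical scorer).
      from typing import Optional

      def toxicity_score(text: str) -> float:
          """Stand-in for a real toxicity classifier; returns a score in [0, 1]."""
          flagged = {"awful": 0.8, "reclaimed-term": 0.6}  # invented examples
          return max((v for w, v in flagged.items() if w in text.lower()), default=0.1)

      def filter_output(text: str, threshold: float = 0.5) -> Optional[str]:
          """Suppress outputs scoring at or above the user's chosen threshold."""
          return text if toxicity_score(text) < threshold else None

      # A fixed default threshold blocks the reclaimed term...
      assert filter_output("a reclaimed-term in context", threshold=0.5) is None
      # ...while raising the threshold (the recourse action) restores access.
      assert filter_output("a reclaimed-term in context", threshold=0.7) is not None
      ```

      The point of the design is that the threshold becomes a user-facing control rather than a fixed system parameter, giving users agency over which outputs are suppressed.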
    4. The Illusion of Artificial Inclusion
      William Agnew, A. Stevie Bergman, Jennifer Chien, Mark Diaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, Kevin R. McKee. CHI 2024.

      [arxiv] [Video]
      • Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have attracted growing interest to the possibility of replacing human participants in these domains with AI surrogates. We survey several such "substitution proposals" to better understand the arguments for and against substituting human participants with modern generative AI. Our scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development work and increasing the diversity of collected data. However, these proposals ignore and ultimately conflict with foundational values of work with human participants: representation, inclusion, and understanding. This paper critically examines the principles and goals underlying human participation to help chart out paths for future work that truly centers and empowers participants.
    5. (Unfair) Norms in Fairness Research: A Meta-Analysis
      Jennifer Chien, A. Stevie Bergman, Kevin R. McKee, Nenad Tomasev, Vinodkumar Prabhakaran, Rida Qadri, Nahema Marchal, William Isaac. In Submission.

      [arxiv]
      • Algorithmic fairness has emerged as a critical concern in artificial intelligence (AI) research. However, the development of fair AI systems is not an objective process. Fairness is an inherently subjective concept, shaped by the values, experiences, and identities of those involved in research and development. To better understand the norms and values embedded in current fairness research, we conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics, AIES and FAccT, covering a final sample of 139 papers over the period from 2018 to 2022. Our investigation reveals two concerning trends. Paper authorship indicates a dominant US-centric perspective throughout fairness research, alongside a widespread reliance on binary codifications of human identity (e.g., "Black/White", "male/female"). These findings highlight how current research often overlooks the complexities of identity and lived experiences, ultimately failing to represent diverse global contexts in defining algorithmic bias and fairness. We discuss the limitations of these research design choices and offer recommendations for fostering more inclusive and representative approaches to fairness in AI systems, urging a paradigm shift that embraces nuanced, global understandings of human identity and values.
    6. Fairness Vs. Personalization: Towards Equity in Epistemic Utility
      Jennifer Chien and David Danks. In Submission.

      [Slides] [Video] [FAccTRec 2023 Workshop Paper (Paper Selected for a Long Presentation)]
      • Personalized recommender systems offer a more efficient way to navigate the vast array of items available. However, alongside this growth, there has been increased recognition of the potential for algorithmic systems to exhibit and perpetuate biases, risking unfairness in personalized domains. In this work, we explicate the inherent tension between personalization and conventional implementations of fairness. As an alternative, we propose equity to achieve fairness in the context of epistemic utility. We provide a mapping between goals and practical implementations and detail policy recommendations across key stakeholders to forge a path towards achieving fairness in personalized systems.
    7. Algorithmic Censoring in Dynamic Learning Systems
      Jennifer Chien, Margaret Roberts, Berk Ustun. EAAMO 2023.

      [Poster] [arxiv] [ICML 2021 Workshop Paper]
      • Dynamic learning systems subject to selective labeling exhibit censoring, i.e., persistent negative predictions assigned to one or more subgroups of points. This results in groups of applicants who are persistently denied and thus never enter the training data (bad!). In this work, we formalize censoring, demonstrate how it can arise, and highlight difficulties in detection. We consider two safeguards against censoring, recourse and randomized exploration, both of which ensure we collect labels for points that would otherwise go unobserved. The resulting techniques allow examples from censored groups to enter the training data and correct the model (yay!). Our results highlight the otherwise unmeasured harms of censoring and demonstrate the effectiveness of mitigation strategies across a range of data-generating processes.
    8. Recent Advances, Applications, and Open Challenges: Reflections from Roundtables at ML4H 2022 Symposium
      Stefan Hegselmann, Yuyin Zhou, Helen Zhou, Jennifer Chien, et al. White paper, 2023.

      • A summary of the roundtable sessions at the 2nd Machine Learning for Health (ML4H) symposium, held both virtually and in person on November 28, 2022, in New Orleans, Louisiana, USA (Parziale et al., 2022). This document compiles the takeaways from the roundtable discussions, including recent advances, applications, and open challenges for each topic, and concludes with a summary and lessons learned across all roundtables.
    9. Actionable Recourse via GANs for Mobile Health
      Jennifer Chien, Anna Guitart, Ana Fernández del Río, África Periáñez, Lauren Bellhouse. ML4H 2022.

      [Poster]
      • Mobile health apps provide a unique means of collecting data that can be used to deliver adaptive interventions. The predicted outcomes considerably influence the selection of such interventions. Recourse via counterfactuals provides tangible mechanisms to modify user predictions. By identifying plausible actions that increase the likelihood of a desired prediction, stakeholders are afforded agency over their predictions. Furthermore, recourse mechanisms enable counterfactual reasoning that can help provide insights into candidates for causal interventional features. We demonstrate the feasibility of GAN-generated recourse for mobile health applications on ensemble-survival-analysis-based prediction of medium-term engagement in the Safe Delivery App, a digital training tool for skilled birth attendants.
    10. ConsHMM Atlas: Conservation State Annotations for Major Genomes and Human Genetic Variation
      Adriana Arneson, Brooke Felsheim, Jennifer Chien, Jason Ernst. DOI: 10.1101/2020.03.01.955443

      [Poster]
      • ConsHMM is a method recently introduced to annotate genomes into conservation states, which are defined based on the combinatorial and spatial patterns of which species align to and match a reference genome in a multi-species DNA sequence alignment. Previously, ConsHMM was only applied to a single genome for one multi-species sequence alignment. Here, we apply ConsHMM to produce 22 additional genome annotations covering human and seven other organisms for a variety of multi-species alignments. Additionally, we extend ConsHMM to generate allele-specific annotations, which we use to produce conservation state annotations for every possible single-nucleotide mutation in the human genome. Finally, we provide a web interface to interactively visualize parameters and annotation enrichments for ConsHMM models. These annotations and visualizations comprise the ConsHMM Atlas, which we expect will be a valuable resource for analyzing a variety of major genomes and genetic variation.


    Professional Projects


    Awards and Honors

  • Graduate Fellowships for STEM Diversity (GFSD) Fellow (previously National Physical Science Consortium) (2021-2027)
  • UCSD CSE Doctoral Award for Contributions to Diversity (June 2022)
  • UCSD School of Global Policy & Strategy Science Policy Fellow (2020-2022)
  • Xilinx WIT University Grant Program Awardee (funds allocated to GradWIC, valued at $25K)
  • UCSD CSE Doctoral Award for Excellence in Service and Leadership (June 2021)
  • UCSD Jacobs School of Engineering Fellowship (2019-2021)
  • Honorable Mention in the 2021 National Science Foundation Graduate Research Fellowship Competition (April 2021)
  • Team selected to innovate in the CFPB Virtual Tech Sprint on Adverse Action Notices (Oct 5-9, 2020)
  • #LWTSUMMIT Scholarship (February 2020)
  • 2020 CRA-WP Grad Cohort for Women Workshop Scholarship (November 2020, attended May 2021)
  • Honorable Mention in the 2019 National Science Foundation Graduate Research Fellowship Competition (April 2019)
  • Rewriting the Code 2019 Fellow (2019-2020)
  • Best Visualization and Analysis (Best In Show) – 2018 BOW Datafest (April 2018)
  • Best Presentation Award – Intern Research Review Novartis Institute for BioMedical Research (August 2017)

    Leadership

    From 2020 to 2023, I was the President of UCSD Graduate Women in Computing (GradWIC), a graduate student-run organization focused on community building, outreach, and mentorship. I organized and led professional development and graduate school application workshops, and managed scholarships for students to attend the Grace Hopper Celebration, the ACM Richard Tapia Celebration of Diversity in Computing, and CRA-WP Graduate Cohort Workshops. I was a speaker, facilitator, and organizer for many campus visits by high school students, undergraduates, and prospective MS and PhD students. Through this work, I created an innovative, informative, and inclusive environment, focused primarily on uplifting and supporting those who identify as women and historically underrepresented minorities in computing fields. I also served as treasurer, raising over $70K during my tenure.

    I am a mentor in three UCSD-based mentorship programs:
  • GradWIC: Hena Ahmed (2023-2024); Manzhen Jing and Sanjey Sumathi (2021-2022); Feng Hou and Alisha Ukani (2020-2021)
  • GradPal: Kyeongyeon Lee and Langley Barth (2020-2021)
  • Jacobs Undergraduate Mentoring Program (JUMP): Erin Griggs (SWE President), Ava Real, and Dadian Zhu (2020-2021)
  • ...and many others in an unofficial capacity!


    Selected Leadership and Service Roles

    (see CV for full list)
  • Subverting Professionalism Workshop Leader (May 2023)
  • GradWIC x Kearny High School Field Trip Keynote Speaker (May 2023)
  • NCWIT Aspirations in Computing Award Ceremony Panelist (April 2023)
  • Junior Chair and Roundtable Discussion Co-Leader of Machine Learning for Health (ML4H) 2022
  • GradWIC x DEI x WIC Graduate Application Workshop Series Chair (4 Workshops)
  • Deepmind Queer in AI Workshop Panelist
  • CSE Representative for Graduate and Professional Students Association (GPSA) (2021-2022)
  • UCSD CSE DEI Committee Grad School: Why go, MS vs PhD, What to Expect Workshop Leader
  • UCSD Visit Day 2022 Event Coordinator, Student Panel Moderator and Panelist (March 2022)
  • UCSD CSE MS Orientation Graduate Student Moderator and Panelist (Sept 2021)
  • UCSD CSE PhD Orientation Graduate Student Moderator and Panelist (Sept 2021)
  • Bruins in Genomics (BIG) Summer Alumni Panelist, Mental Health Discussion Facilitator
  • WIC GBM #2 - Explore Computer Science Panelist (May 2021)
  • CSE DEI Committee Celebration of Diversity Mini-Series: Fitting in vs. Belonging Workshop Leader
  • Jacobs Undergraduate Mentoring Program (JUMP) Grad School Panel & Graduate Research Opportunities Panelist (April 2021)
  • UCSD CSE Visit Day 2021 DEI Panelist and Game Night (hosted by GradWIC) Event Organizer (March 2021)
  • UCSD Women's Center STEMinist Panel: Visualizing a More Equitable Future for Womxn in STEM (March 2021)
  • Society of Women Engineers: Envision 2021 Workshop Volunteer (February 2021)
  • GradWIC x WIC Graduate School Application Workshop Series (7 Workshops) (Fall 2020)
  • UCSD Women's Center Virtual STEM Student Organization Fair (December 2020)
  • UCSD CS PhD Information Session Moderator (November 2020)
  • UCSD Jacobs Graduate Student Council Incoming Graduate Student Information Session Panelist (October 2020)
  • UCSD CSE MS Orientation Graduate Student Moderator and Panelist (September 2020)
  • UCSD CSE PhD Orientation Graduate Student Moderator and Panelist (September 2020)
  • GradWIC How to Network Event Moderator (August 2020)
  • Bruins In Genomics (BIG) Summer Alumni Panelist (July 2020)
  • Discussion Facilitator at UCSD GSA 2nd Annual Mental Health Matters Symposium (May 2020)
  • Founder and Chair of Wellesley ACM-W Student Chapter (2018-2019)

    Personal Projects

    In my spare time, I love creating, cooking, and bettering myself.

    Picnics in the Park

    Socially distancing, getting fresh air, and snacking in the park. Themed attire required; we pick a new theme for every meal!

    Quarantine Glow Up

    Started doing pilates. Still miss lifting at the gym.

    Spam Musubi

    Made fresh every Sunday. Still trying to find the PERFECT combination of ingredients.
    Pictured: spam, seasoned rice, egg, and lettuce.

    Plant Parent

    My philodendron from Trader Joe's is thriving at home. My room is a jungle. Watering days are workouts.

    GradWIC Merch

    Hand-embroidered GradWIC merch for my board members!

    Unplug in Yosemite

    Being off-grid and completely unreachable for about a week makes you realize how plugged in we truly are.


    Connect With Me

    Personal
    jjchien "at" ucsd "dot" edu
    Bluesky, Twitter, Scholar, LinkedIn


    UCSD GradWIC
    cse-gradwic-officers "at" eng "dot" ucsd "dot" edu
    Website, Facebook, Instagram